title
string
paper_decision
string
review_1
string
rebuttals_1
string
review_2
string
rebuttals_2
string
review_3
string
rebuttals_3
string
review_4
string
rebuttals_4
string
global_rebuttals
string
dataset_source
string
conference_year
int64
review_5
string
rebuttals_5
string
review_6
string
rebuttals_6
string
review_7
string
rebuttals_7
string
review_8
string
rebuttals_8
string
Removing Hidden Confounding in Recommendation: A Unified Multi-Task Learning Approach
Accept (poster)
Summary: The paper addresses that there are hidden confounders in recommendations and existing methods cannot handle them well. \ The paper proposes a unified multi-task learning approach to tackle that problem. \ Specifically, they devise a residual network to calibrate the propensity and the imputed error by using unbiased data. Strengths: 1. The paper is well-organized and presented. 2. Each loss function is technically solid. - The proposed method is end-to-end multi-task learning with a debiasing residual network that simultaneously deals with selection bias, data sparsity, and hidden confounding. 3. The proposed method is validated through experiments. - They adopt three real-world datasets, three metrics, and extensive baselines. Weaknesses: 1. Motivation is weak. - Hidden confounders are assumed to exist. There is no theoretical or experimental evidence for that. - Proofs (Proposition 1, Theorem 2) are naive. They are proved on the assumption that is already the conclusion of the proofs. 2. Proposed end-to-end framework is not novel. - There is already an end-to-end multi-task learning framework [7] removing hidden confounders with biased and unbiased data. - Loss functions are from the existing methods. Most of them are already adopted in a multi-task manner in ESCM^2 [33]. - The unique contribution of this paper relies on unbiased data, which is hard to obtain in real-world applications. 3. Missing related work - Balancing Unobserved Confounding with a Few Unbiased Rating in Debiased Recommendations, WWW 2023. - StableDR: Stabilized Doubly Robust Learning for Recommendation on Data Missing Not At Random. ICLR 2023. - Multiple Robust Learning for Recommendation. AAAI 2023. - This work seems to tackle the same problem as the proposed work and needs to be included in the manuscript. 4. Notations are confusing - There are no explicit loss forms. For example, what is $L^{B}_{CVR}$? - The argument of loss functions is not organized. I cannot distinguish which is the training parameter and which is the fixed parameter. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: There are many hyperparameters for the loss functions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for the helpful suggestions. **Below, we hope to address your concerns and questions to improve the clarity and quality of our paper.** > **W1:** Motivation is weak. - Hidden confounders are assumed to exist. There is no theoretical or experimental evidence for that. **Response to W1:** We thank the reviewer for pointing out this issue. Theoretically, the assumption of "unconfoundedness", i.e., the conditional independence of treatment and outcome given the covariates, cannot be tested with observational data, and thus we cannot theoretically prove the existence of hidden confounders [1]. However, experimentally, as summarized in a recent survey paper [2] on causal recommendation, a large amount of empirical evidence for the existence of hidden confounders has been explored. - Proofs (Proposition 1, Theorem 2) are naive. They are proved on the assumption that is already the conclusion of the proofs. **Response to W1:** We agree with the reviewer that it is not hard to derive Proposition 1 and Theorem 2, thus it should not be regarded as the core contribution of this paper. However, it provides us a very intuitive way to understand the motivation of calibrating the learned nominal propensities and nominal error imputations by a multi-task learning approach using unbiased data. > **W2:** Proposed end-to-end framework is not novel. - There is already an end-to-end multi-task learning framework [7] removing hidden confounders with biased and unbiased data. **Response to W2:** The reviewer raises an interesting concern. However, we have reflected this comment in Section 3.3, where we discuss the difference between our work and [7]. In summarize, [7] proposed to adopt sensitivity analysis from the causal inference literature to minimize the worst-case prediction loss. Whereas our work propose to tackle hidden confounding by a multi-task learning approach using unbiased data to calibrate learned nominal propensities and nominal error imputations. - Loss functions are from the existing methods. Most of them are already adopted in a multi-task manner in ESCM^2 [33]. **Response to W2:** We thank the reviewer for pointing out this issue. However, we would like to emphasize that (a) the motivation and problem set are different: the main purpose of our work is to address unobserved confounding using unbiased data, whereas ESCM^2 only uses biased data to address selection bias; (b) the model setup is different: our work novelly introduces a residual imputation model and a residual propensity model to calibrate the learned biased nominal propensities and biased nominal error imputations due to unobserved confounding. - The unique contribution of this paper relies on unbiased data, which is hard to obtain in real-world applications. **Response to W2:** Thank you for the comment. We note that much recent work proposes to use a small amount of unbiased ratings to address selection bias, but fails to address unobserved confounding [3, 4]. Empirically, we add experiments in a supplemental one-page pdf that verify that even if the unbiased scoring has much much smaller scale (<0.1\%), the proposed method still has promising performance. To gather unbiased ratings, we may ask users to rate randomly selected items. This way, the propensities of observing different ratings are the same and the observed ratings are thus unbiased. > **W3:** Missing related work. - Balancing Unobserved Confounding with a Few Unbiased Rating in Debiased Recommendations, WWW 2023. **Response to W3:** First, we would like to kindly remind the reviewers that the work mentioned was published online in 30 April 2023, see: https://dl.acm.org/doi/abs/10.1145/3543507.3583495, while our submission was in May, which is in line with NeurIPS submission policy for baseline comparisons (due to the very close time). **We add comparisons that use this work as a baseline, and the experimental results validate the superiority of our proposal (please kindly refer to the one-page supplementary pdf for the detailed experimental results).** - StableDR: Stabilized Doubly Robust Learning for Recommendation on Data Missing Not At Random. ICLR 2023. - Multiple Robust Learning for Recommendation. AAAI 2023. **Response to W3:** We thank the reviewers for the useful suggestions. We kindly remind the reviewer that though both of them aim to combat the selection bias, neither of them tacking the effect of unobserved confounding. And as suggested by the reviewer, we will include them in our manuscript. **We also add comparisons that use these works as baselines in our one-page supplementary pdf.** > **W4:** Notations are confusing. - There are no explicit loss forms. For example, what is $L_{CVR}^B$? **Response to W4:** Please kindly refer to line 108.5 in our original manuscript, $L_{CVR}^B$ is either $L_{IPS}$ or $L_{DR}$. - The argument of loss functions is not organized. I cannot distinguish which is the training parameter and which is the fixed parameter. **Response to W4:** The reviewer raises an interesting concern. Following ESCM^2, our work uses multi-task learning to simultaneously train the propensity model, the CVR model, and the imputation model. *** **We hope the above discussion will fully address your concerns about our work, and we would really appreciate it if you could be generous in raising your score.** We look forward to your insightful and constructive responses to further help us improve the quality of our work. Thank you! *** **References** [1] Imbens, Guido W., and Donald B. Rubin. Causal inference in statistics, social, and biomedical sciences. 2015. [2] Luo, Huishi, et al. "A Survey on Causal Inference for Recommendation." 2023. [3] Wang, Xiaojie, et al. "Combating selection biases in recommender systems with a few unbiased ratings." WSDM, 2021. [4] Chen, Jiawei, et al. "AutoDebias: Learning to debias for recommendation." SIGIR, 2021. --- Rebuttal Comment 1.1: Title: Review response Comment: Thank you for your response! It really helps me understand the manuscript. After I read your responses, I decided to raise my score from 3 to 4. Still, I cannot find any unique contribution of this paper, so I did not raise the score to the acceptance side. In my humble opinion, the contribution is somewhat incremental. --- Reply to Comment 1.1.1: Title: Thank you for your constructive comments and raising the score! Comment: We are glad to know that many concerns in your original comments have been effectively addressed. We are very grateful for your constructive comments and questions, which helped improve the clarity and quality of our paper. We will provide more clarifications and explanations in the revised version. Thanks again!
Summary: This paper presents a critical examination of prevalent debiasing methods in recommendation systems and their limitations in addressing hidden confounding factors. The authors underline that current methods—propensity-based, multi-task learning, and bi-level optimization—fail to mitigate selection bias when unobserved confounding variables exist. In response, they propose a unified multi-task learning approach that incorporates a small set of unbiased ratings to calibrate nominal propensities and error imputations, thereby reducing the influence of hidden confounding factors. The approach utilizes a newly introduced consistency loss for calibration, advancing the field's theoretical understanding of bias and confounding in recommendation systems. The paper further validates its method through comprehensive experiments on three public benchmark datasets, including a large-scale industrial dataset. The outcomes affirm the efficacy of their approach in countering selection bias and hidden confounding. The study hence significantly contributes to the realm of recommender systems, revealing theoretical shortcomings in current debiasing approaches, and offering a novel, robust method for reducing bias in practice. Strengths: 1. Proposed methods show quite significant improvements over the strong baselines. 2. Good to interpret each term in the equations. Very helpful for readers to intuitively understand the equations. 3. Code is attached, to be published with the paper. 4. Extended and complete experiments conducted on real-world datasets including a large-scale industrial dataset with many baselines and ablation studies. Weaknesses: 1. Reference numbers for equations are missing. Please add the reference numbers to make referring more easily. 2. Where is Theorem 1? Or why not Lemma 1, Proposition 2, and Theorem 3? 3. Please refer to the proofs in the Appendix in the main text when the corresponding theorem or lemma are given. 4. Equation of L_{CVR}^{B&U} in Sect 4.4 is not very well-defined. The cross-entropy loss \delta(., .) is defined for binary random variables, however, neither L_{CVR}^B or L_{CVR}^U is a binary random variable. Please spell out the specific definition, as it’s also one of the main contribution. 5. Statistical significance analysis is missing. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How good will the method be useful on real-world products? Assuming it is possible to get unbiased labels from evaluators, but usually at a much much smaller scale (<0.1%), and sparser (no consistent history of a user). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. What are the assumptions about the hidden confounding? The main new term introduced L_{CVR}^{B&U} is not sufficiently discussed. If I understand correctly, the main assumptions are that the unbiased set has no hidden confounding and aligning the distribution of prediction on the biased set to that of the unbiased set should calibrate / de-confound the predictions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for the helpful suggestions. **Below, we hope to address your concerns and questions to improve the clarity and quality of our paper.** Below we categorized the reviewers' concerns into **Methodology**, **Experiments**, and **Clarity**. ### **Methodology** > **W4:** Equation of L_{CVR}^{B&U} in Sect 4.4 is not very well-defined. The cross-entropy loss \delta(., .) is defined for binary random variables, however, neither L_{CVR}^B or L_{CVR}^U is a binary random variable. Please spell out the specific definition, as it’s also one of the main contribution. **Response to W4:** We apologize for the lack of clarity and its caused misunderstanding by the reviewer. **In fact, we use \delta(. , .) to denote generic loss functions** in our manuscript and **are not limited to cross-entropy losses**. The reviewer is correct that **for L_{CVR}^{B&U}, we use the square of the difference between L_{CVR}^B and L_{CVR}^U.** This can also be verified in our released codes in the supplementary material. Instead, when defining the ideal loss, since this work considers binary ratings, we use cross-entropy loss in our experiments. We will clarify it in our revised version. > **Limitations:** What are the assumptions about the hidden confounding? The main new term introduced L_{CVR}^{B&U} is not sufficiently discussed. If I understand correctly, the main assumptions are that the unbiased set has no hidden confounding and aligning the distribution of prediction on the biased set to that of the unbiased set should calibrate / de-confound the predictions. **Response to Limitations:** With respect to the assumptions about hidden confounding, **the reviewer is correct: the main assumptions are that the unbiased set has no hidden confounding.** Nevertheless, we would like to clarify that this is a truth for unbiased datasets, instead of an assumption about hidden confounding [1, 2]. To gather unbiased ratings, we may ask users to rate randomly selected items. This way, the propensities of observing different ratings are the same and the observed ratings are thus unbiased. For the term L_{CVR}^{B&U}, **the reviewer is also correct: we align the distribution of prediction on the biased set to that of the unbiased set to calibrate / de-confound the predictions.** The core idea to tackle the hidden confounding by a multi-task learning approach is **the using of unbiased data to calibrate learned nominal propensities and nominal error imputations,** which motivates our design for the consistency loss L_{CVR}^{B&U}. ### **Experiments** > **W4:** Statistical significance analysis is missing. **Response to W4:** As suggested by the reviewer, we add the statistical significance test in the attachment PDF. The result shows that Res-IPS and Res-DR outperform other baseline methods. > **Questions:** How good will the method be useful on real-world products? Assuming it is possible to get unbiased labels from evaluators, but usually at a much much smaller scale (<0.1%), and sparser (no consistent history of a user). **Response to Questions:** Thanks for the question. Actually, we conduct the unbiased data ratio experiments from **2% to 10%** in the submitted paper. As suggested by the reviewer, we add a unbiased data ratio experiments from **0.05% to 10% with more subtle step size** in the attachment PDF. It shows that some of the baseline methods (like autodebias on KuaiRec) performance can drop off dramatically with very little unbias data. However, Res-IPS and Res-DR still outperform the baseline methods with varying unbiased data ratio. Please kindly refer to the attachment PDF for more details. ### **Clarity** > **W1:** Reference numbers for equations are missing. Please add the reference numbers to make referring more easily. **Response to W1:** Thanks for your helpful suggestion, and we will add reference numbers for equations in our revised manuscript. > **W2:** Where is Theorem 1? Or why not Lemma 1, Proposition 2, and Theorem 3? **Response to W2:** We thank the reviewer for pointing out this issue, it seems our template generates the wrong number of lemmas, propositions, and theorems, and we apologize to this expected issue and will adjust the numbering of lemmas, propositions, and theorems in the revised version. > **W3:** Please refer to the proofs in the Appendix in the main text when the corresponding theorem or lemma are given. **Response to W3:** We agree with you and will add the "please refer to the proofs in the Appendix" in our main text when the corresponding theorem or lemma are given. Thanks again for helping us to improve the clarity of our paper. *** **We hope the above discussion will fully address your concerns about our work, and we would really appreciate it if you could be generous in raising your score.** We look forward to your insightful and constructive responses to further help us improve the quality of our work. Thank you! *** **References** [1] Wang, Xiaojie, et al. "Combating selection biases in recommender systems with a few unbiased ratings." Proceedings of the 14th ACM International Conference on Web Search and Data Mining. 2021. [2] Chen, Jiawei, et al. "AutoDebias: Learning to debias for recommendation." Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2021. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed responses. Given the additional experimental results on statistical significance and ratio of unbiased data in the appended pdf, I will raise my rating to 5. --- Reply to Comment 1.1.1: Title: Thank you for your constructive comments and raising the score! Comment: We are glad to know that your concerns have been effectively addressed. We are very grateful for your constructive comments and questions, which helped improve the clarity and quality of our paper. Thanks again!
Summary: This paper highlights the prevalent issue of selection bias in recommender systems, emphasizing the often-overlooked aspect of hidden confounding. Existing approaches and their limitations are discussed, with a special focus on hidden confounders. The authors then introduce a unified multi-task learning approach which utilizes a small set of unbiased ratings to calibrate nominal propensities and error imputations. This approach aims to handle hidden confounding, thereby achieving unbiased learning. Strengths: - The paper does an excellent job of dissecting existing propensity-based and multi-task learning methods theoretically to demonstrate how they can lead to biased learning in the presence of hidden confounding. - Although the multi-task learning approach for debiasing is not novel, the specific setting of calibrations on unbiased data is novel. It uniquely uses a few unbiased ratings to calibrate learned propensities and imputed errors from biased data, aiming to eliminate the biases caused by hidden confounding. The unified loss is also justified by theoretical analysis. - The proposed methods are extensively validated on three publicly available benchmark datasets, including a large-scale industrial dataset. This broad empirical evidence strengthens the claims made in the paper. Weaknesses: - The theoretical result is weak. It states that if the consistency loss is zero then the calibrated loss is unbiased. While the result does help justify the design of the loss function, it does not provide any guarantee regarding the unbiasedness of the learned model. The consistency loss might not reach zero or even be small enough to reduce the bias of the calibrated loss. - The overall model contains multiple loss components and multiple parameters. While the authors point out the potential issues of the existing minimax framework, the proposed solution requires more parameter tuning. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: In Figure 2, the higher weight of the consistency loss actually reduces the performance of the model which is not very intuitive. Based on the propositions, reduced consistency loss leads to unbiased calibration loss. With higher weight, we expect smaller consistency loss and in general a better calibration loss. Why do we observe a regression in the AUC? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for the helpful suggestions. **Below, we hope to address your concerns and questions to improve the clarity and quality of our paper.** > **W1:** The theoretical result is weak. It states that if the consistency loss is zero then the calibrated loss is unbiased. While the result does help justify the design of the loss function, it does not provide any guarantee regarding the unbiasedness of the learned model. The consistency loss might not reach zero or even be small enough to reduce the bias of the calibrated loss. **Response to W1:** Thanks for your comments. We are glad that the reviewer agree that the theoretical results do help justify the design of the loss function. Actually, all the theoretical results are intended to provide us with a very intuitive way to understand the design of each component of the proposed method, which is important to make the presentation clearer. We fully agree with the reviewer that the consistency loss might not reach zero or even be small enough to reduce the bias of the calibrated loss. However, we would like to clarify that **this is a general problem with multi-tasking learning and is not unique to our method**, see references [1, 2, 3]. Actually, **all the methods cannot guarantee the exact zero for the loss functions.** For example, propensity-based methods are unbiased when the propensity model is correctly specified, but no methods can guarantee and verify they obtain 100\% accurate estimation of propensities. In summary, the proposed method provides a framework to combat the risk of hidden/unmeasured confounding. In particular, **the consistency loss provides a ideal and natural optimization direction to debias in the presence of hidden/unmeasured confounding, and with theoretical guarantees when its value is zero.** > **W2:** The overall model contains multiple loss components and multiple parameters. While the authors point out the potential issues of the existing minimax framework, the proposed solution requires more parameter tuning. **Response to W2:** Thanks for your comments. Indeed, we agree with the reviewer that the proposed methods requires more parameter tuning. However, we would like to clarity that due to the usage of the unbiased dataset and the carefully designed model structure in the proposed method, it is easy to obtain several hyper-parameter combination of $\alpha, \beta, \gamma$ with promising performance in practice. Actually, the proposed method mainly adds only an additional hyper-parameter $\gamma$ empirically. This is because we can first implement ESCM$^2$ method (the state-of-the-art multi-task learning method for combating the selection bias) and then take it as a pre-trained model. Thus, **we essentially have only one hyper-parameter ($\gamma$) instead of three, which is easy to implement (Please kindly refer to our codes in the Supplementary Material).** > **Q1:** In Figure 2, the higher weight of the consistency loss actually reduces the performance of the model which is not very intuitive. Based on the propositions, reduced consistency loss leads to unbiased calibration loss. With higher weight, we expect smaller consistency loss and in general a better calibration loss. Why do we observe a regression in the AUC? **Response to Q1:** We thank the reviewer for pointing out this issue and apologize for the lack of clarity. As shown in Figure 2, our method performs best when $\gamma$ is moderate. Here are the reasons. - **When $\gamma$ is too large**, **though a higher weight can mitigate the shift between training set and testing set caused by unobserved confounding, it hurts the performance of other tasks** according the loss function $L_{Res}$ in lines 179-180. This is becase much attention is paid to $L_{CVR}^{B\\&U}$ (which strictly relies on $L_{CVR}^{B}$ and $L_{CVR}^{U}$), whereas the other loss functions (including $L_{CTR}$, $L_{IMP}$, $L_{CVR}^{B}$, $L_{CTCVR}^{B}$, and $L_{CVR}^{U}$) cannot be sufficiently optimized, which prevents the debiased learning of the CVR model (via $L_{CVR}^{B}$, $L_{CTCVR}^{B}$, and $L_{CVR}^{U}$), accurate learned propensities (via $L_{CTR}$), and accurate imputed errors (via $L_{IMP}$) from being realized, leading to sub-optimal prediction performance. - **When $\gamma$ is too small**, it makes this balance loss $L_{CVR}^{B\\&U}$ be paid with less attention, so that **the residuals are no longer sufficiently updated to combat the unobserved confounding.** Therefore, our methods perform best when $\gamma$ is moderate. *** **We sincerely thank you for your feedback and will provide more clarifications and explanations in the revised version, and welcome any further technical advice or questions on this work and we will make our best to address your concerns.** ___ **References** [1] Ma, Xiao, Liqin Zhao, Guan Huang, Zhi Wang, Zelin Hu, Xiaoqiang Zhu, and Kun Gai. Entire space multi-task model: An effective approach for estimating post-click conversion rate. In SIGIR, 2018. [2] Zhang, Wenhao, Wentian Bao, Xiao-Yang Liu, Keping Yang, Quan Lin, Hong Wen, and Ramin Ramezani. Large-scale causal approaches to debiasing post-click conversion rate estimation with multi-task learning. In WWW, 2020. [3] Wang, Hao, Tai-Wei Chang, Tianqiao Liu, Jianmin Huang, Zhichao Chen, Chao Yu, Ruopeng Li, and Wei Chu. ESCM2: Entire Space Counterfactual Multi-Task Model for Post-Click Conversion Rate Estimation. In SIGIR, 2022.
Summary: This paper studies unbiased learning in recommendation systems in the presence of hidden confounding. The authors first theoretically analyze the limitations of previous MTL methods and those combine some unbiased data and then design a unified MTL debiasing method by calibrating the learned nominal propensities and error imputations using a novel consistency loss. Extensive experiments on benchmark datasets have shown the effectiveness of their proposed method. Strengths: 1. The work is well-motivated with the analysis of the limitations of existing work in the presence of hidden confounding. 2. The presentation of the method design is clear and vivid. Each part of the optimization goal is stated in a good structure. 3. Extensive experiments of comparison with various baselines and in-depth analysis are persuasive, which shows the effectiveness of the designed method and the function of each component. Weaknesses: The novelty of this work lies in the proposed consistency loss that utilizes unbiased data to calibrate the learned nominal propensities and imputed errors from the biased data. However, the theoretical analysis of this term seems trivial (Proposition 3). Maybe the trade-off between different terms in $\mathcal{L}_{Res}$ can be discussed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: How do you tune the hyperparameters $\alpha, \beta, \gamma$? Do you use a validation set? If so, how do other baselines use the validation set? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have a discussion of the limitations in the Conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for the helpful suggestions. **Below, we hope to address your concerns and questions to improve the clarity and quality of our paper.** > **W1:** The novelty of this work lies in the proposed consistency loss that utilizes unbiased data to calibrate the learned nominal propensities and imputed errors from the biased data. However, the theoretical analysis of this term $\mathcal{L}_{res}$ seems trivial (Proposition 3). **Response to W1:** We agree with the reviewer that it is not hard to derive Proposition 3, thus it should not be regarded as the core contribution of this paper. However, **it provides us a very intuitive way to understand the design of $\mathcal{L}_{CVR}^{B\\&U}$, which is important to make the presentation clearer.** > **W2:** Maybe the trade-off between different terms in $\mathcal{L}_{res}$ can be discussed. **Response to W2:** We thank the reviewer for pointing out this issue. Please kindly refer to the discussion of the trade-off between different terms in $\mathcal{L}_{res}$ in below. - First, **we perform the sensitivity analysis of $\gamma$, which is the weight for $\mathcal{L}_{CVR}^{B\\&U}$ in Res-IPS and Res-DR.** The associated results are displayed in Figure 3, which indicates that our methods perform best when $\gamma$ is moderate. - This is because when $\gamma$ is too large, it hurts the performance of other tasks (e.g., debiased CVR model training), and when $\gamma$ is too small, it makes this balance loss be paid with less attention, so that the residuals are no longer sufficiently updated. - Meanwhile, **we conduct ablation studies for Res-IPS and Res-DR, with respect to the residual components and the training losses, respectively.** The results are shown in Table 3 and Table 4, from which one can see that our methods reach the best performance when both two losses are preserved. > **Q1:** How do you tune the hyperparameters $\alpha, \beta, \gamma$? **Q2:** Do you use a validation set? If so, how do other baselines use the validation set? **Response to Q1 and Q2:** We thank the reviewer for raising this question. **Yes, we use a validation set to tune the hyper-parameters** and to decide the stop criterion **for all baseline methods.** Specifically, **for the stop criterion,** the training process is finished **when the predicted AUC value on the validation set no longer increases.** Also, **for tuning the hyper-parameters,** we tune $\alpha$ in $\\{0.1, 0.5, 1\\}$, $\beta$ in $\\{0.1, 0.5, 1, 5, 10\\}$, and $\gamma$ in $\\{0.001, 0.005, 0.01, 0.05, 0.1\\}$ and use **grid search** to choose the hyper-parameters **which can achieve the highest AUC value on the validation set.** *** **We sincerely thank you for your feedback and will provide more clarifications and explanations in the revised version, and welcome any further technical advice or questions on this work and we will make our best to address your concerns.**
Rebuttal 1: Rebuttal: We sincerely thank you for all helpful suggestions. We add statistical significant results and more detailed results of the unbiased data ratio in the attached PDF. We welcome any further technical advice or questions on this work and we will make our best to address your concerns. Pdf: /pdf/bae1c9ba763eb7cbfeb0ec7e3d8e800381233535.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper shows that existing approaches that are based on multi-task learning or take advantage of unbiased data have theoretical limitations in the problem of recommendation debiasing when hidden confounding is present. The paper proposes to address these limitations by a unified multi-task learning approach whose key idea is to remove hidden confounding by calibrating learned nominal propensities and nominal error imputations by a consistency loss on unbiased data. The paper conducts extensive experiments to confirm the effectiveness of the proposed approach on 3 widely used recommendation datasets. Strengths: S1: The idea of tackling hidden confounding by a multi-task learning approach using unbiased data to calibrate learned nominal propensities and nominal error imputations is novel to me. S2: The paper compares the proposed approach against a wide range of existing representative approaches on 3 benchmark recommendation datasets under commonly used evaluation metrics. S3: The paper does a great job in reviewing major research directions in the problem of recommendation debiasing and the references cover a full spectrum of related works. Weaknesses: W1: It is not fair to claim that the paper is the first to "perform theoretical analysis to reveal the possible failure of previous approaches" as Ding et al. has already done a majority of such theoretical analysis [7], which is cited by the paper. W2: The proofs for the theoretical results presented in the paper are a bit hand waved. Take Theorem 2 as an example. It is not unclear what is the space spanned by x and why the fact that \bar{p} will not degenerate a point in the space spanned by x leads to the existence of a positive constant \eta that satisfies the condition. [7] Addressing unmeasured confounder for recommendation with sensitivity analysis Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Q1: Do you have any explanations why the proposed approach underperforms RD-DR [7] in terms of R@50 on the KUAIREC dataset? Q2: Is N@K in Tables 1 and 3 NDCG@K? If so, it is important to clarify the meaning of N@5 (and R@5) in the main text as it is not common to abbreviate NDCG@K as N@5. [7] Addressing unmeasured confounder for recommendation with sensitivity analysis Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for the helpful suggestions. **Below, we hope to address your concerns and questions to improve the clarity and quality of our paper.** > **W1:** It is not fair to claim that the paper is the first to "perform theoretical analysis to reveal the possible failure of previous approaches" as Ding et al. has already done a majority of such theoretical analysis [7], which is cited by the paper. **Response to W1:** The reviewer might have misunderstood. We do not claim that we are the first to do this. In lines 7-13, we simply say "We **first** perform theoretical analysis ...... **Then**, we propose ....". : ) Please kindly note that we explicitly cite such theoretical analysis in [7] in line 101 and line 104. We fully agree that this part is not our theoretical contribution. Instead, this paper provides a novel proof to shown the biasedness of AutoDebias and LTD under unobserved confounding in line 122. Below, we kindly provide a detailed comparison table to illustrate the difference between [7] and our work for enhancing clarity of our work. |**Paper** | **[7] Addressing unmeasured confounder for recommendation with sensitivity analysis** | **Ours** | | :--- | :---- | :--- | |Dataset(s)| **Only** biased data| Biased data **with a few unbiased data**| |Method|**Sensitivity analysis,** which is a **Minimax** approach | Using **unbiased data** to **calibrate** learned nominal propensities and nominal error imputations | |Theoretical result |Only **mitigate the unobserved confounding**, see the last paragraph of section 3.2 in [7] :" **Instead of aiming to eliminate the unmeasured confounding thoroughly,** the proposed RD framework ..., which provides a flexible way to **mitigate the unmeasured confounding**"| **Entirely eliminate the unobserved confounding** with a few unbiased ratings| |Assumption| **Assumption** that "assume the nominal propensity score can be expressed as ..." in Section 3.2 in [7] |**No additional assumptions,** except the need to use unbiased data| |Unbiasedness|**No**| **Yes (with the help of the unbiased data)**| |Learning Approach|**Joint learning,** i.e., alternative update (See Alg. 1 in [7])|**Multi-task learning**| > **W2:** The proofs for the theoretical results presented in the paper are a bit hand waved. Take Theorem 2 as an example. It is not unclear what is the space spanned by x and why the fact that \bar{p} will not degenerate a point in the space spanned by x leads to the existence of a positive constant \eta that satisfies the condition. **Response to W2:** We apologize for the lack of clarity. **Next, we give a more detailed proof of Theorem 2**. In the setting of MNAR, $r_{u,i}$ has a non-zero effect on $o_{u,i}$, so $p_{u,i} \neq \bar p_{u,i}$ according to the definition of MNAR. Thus, for some $\epsilon>0$, there exist positive constants $\delta_\epsilon,N_1(\epsilon)>0$, such that for all $|\mathcal{D}|>N_1(\epsilon)$, $$ \mathbb{P}( | \bar p_{u,i} - p_{u,i} | \geq \epsilon )\geq \delta_\epsilon > 0. $$ Without loss of generality, if $\hat p_{u,i}$ is the learned propensity by fitting $o_{u,i}$ with $x_{u,i}$, then $\hat p_{u,i}$ essentially estimates $\mathbb{E}[o_{u,i}=1|x_{u,i}] =\mathbb{P}(o_{u,i}=1 | x_{u,i}) = p_{u,i}$. Then, there exists some $N_2(\epsilon)>0$, such that for all $|\mathcal{D}|>N_2(\epsilon)$, $$ \mathbb{P}( | \hat p_{u,i} - p_{u,i} | \geq \frac{ \epsilon}{2} )<\frac{\delta_\epsilon}{4}. $$ Thus, if $|\mathcal{D}|> \max \\{ N_1(\epsilon), N_2(\epsilon) \\}$, we have $$ \mathbb{P}( | \bar p_{u,i} - p_{u,i} | \geq \epsilon, | \hat p_{u,i} - p_{u,i} | < \frac{ \epsilon}{2} ) = \mathbb{P}( | \bar p_{u,i} - p_{u,i} | \geq \epsilon )-\mathbb{P}( | \bar p_{u,i} - p_{u,i} | \geq \epsilon, | \hat p_{u,i} - p_{u,i} | \geq \frac{ \epsilon}{2} ) \geq {\delta_\epsilon} -\frac{\delta_\epsilon}{4} =\frac{3}{4} \delta_\epsilon. $$ Let $\eta=\epsilon/2$. Since $\\{| \bar p_{u,i} - p_{u,i} |\geq \epsilon, | \hat p_{u,i} - p_{u,i} | < { \epsilon}/{2}\\} \subset \\{| \hat p_{u,i} - \bar p_{u,i} | \geq \eta\\}$, we have $$ \mathbb{P}( | \hat p_{u,i} - \bar p_{u,i} | \geq \eta )\geq \mathbb{P}( | \bar p_{u,i} - p_{u,i} |\geq \epsilon ,| \hat p_{u,i} - p_{u,i} | <\frac{ \epsilon}{2} )> \frac{3}{4}\delta_\epsilon. $$ Thus, $$ \lim_{|\mathcal{D}| \to\infty} \mathbb{P}( | \hat p_{u,i} - \bar p_{u,i} | \geq \eta) \geq\frac{3}{4}\delta_\epsilon>0. $$ Similarly, it can be shown that $$ \lim_{|\mathcal{D}| \to\infty} \mathbb{P}( | \hat \delta_{u,i} - \bar g_{u,i} | \geq \eta) >0. $$ > **Q1:** Do you have any explanations why the proposed approach underperforms RD-DR [7] in terms of R@50 on the KUAIREC dataset? **Response to Q1:** Thanks for your comments. We would like to emphasize that the **overall performance** of the proposed methods is the best among all the methods on all three datasets. In addition, we do carefully tune the parameters of all baseline models, so that some of them will have a competitive result. > **Q2:** Is N@K in Tables 1 and 3 NDCG@K? If so, it is important to clarify the meaning of N@5 (and R@5) in the main text as it is not common to abbreviate NDCG@K as N@5. **Response to Q2:** We thank the reviewer for pointing out this issue. Yes, in Tables 1 and 3, N@K means NDCG@K and R@K means Recall@K. We will clarify this in our revised manuscript. *** **We sincerely thank you for your feedback and will provide more clarifications and explanations in the revised version, and welcome any further technical advice or questions on this work and we will make our best to address your concerns.** --- Rebuttal Comment 1.1: Comment: Thanks for address my comments, which much improves the quality of the paper. I decided to raise my score to 6 --- Reply to Comment 1.1.1: Title: Thank you for your constructive comments and raising the score! Comment: We are glad to know that your concerns have been effectively addressed. We are very grateful for your constructive comments and questions, which helped improve the clarity and quality of our paper. Thanks again!
null
null
null
null
null
null
Block-Coordinate Methods and Restarting for Solving Extensive-Form Games
Accept (poster)
Summary: This work proposes a cyclic coordinate descent method to solve the two-player zero-sum extended form game (EFG) and derives the convergence. Strengths: To me, solving problems with non-separable constraints by coordinate-descent-type methods is novel and interesting. Therefore, I believe that this is indeed a contribution. Weaknesses: I'm not familiar with this area so please forgave me if I made some mistakes. 1. Possibly expensive computational cost per iteration. Following the literature survey in the manuscript, I read reference [1] which studies coordinate descent for solving optimization problems with non-separable non-smooth objective functions. I found that in both the algorithm in [1] and the algorithm in this paper, although only partial gradient is needed at each update, the proximal step with respect to the whole nonsmooth function is needed. More specifically, in Lines 8 and 11, the argmin step might be difficult and computationally expensive since the argmin step is over the whole X and Y and involve all the blocks. 2. This concern follows the first one but focuses on the experiments: the comparison with non-coordinate algorithms may be unfair. The current comparison is in terms of the number of full gradient computations. However, gradient computation is not the only computation cost, Lines 8 and 11 also cause computation costs. I would suggest a comparison in terms of wall clock time. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: no Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: I do not see any potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. We provide responses to your concerns below: > Possibly expensive computational cost per iteration. Following the literature survey in the manuscript, I read reference [1] which studies coordinate descent for solving optimization problems with non-separable non-smooth objective functions. I found that in both the algorithm in [1] and the algorithm in this paper, although only partial gradient is needed at each update, the proximal step with respect to the whole nonsmooth function is needed. More specifically, in Lines 8 and 11, the argmin step might be difficult and computationally expensive since the argmin step is over the whole X and Y and involve all the blocks. We have addressed this in the top-level comment (5). While the argmin in Lines 8 and 11 of Algorithm 1 might make it seem like we are doing a proximal update with respect to the entire feasible set, in fact, we can do this computation without having to consider the entire feasible set (and this is the primary premise of our paper). Please also see the implementation version of the algorithm in Appendix D that makes this more transparent. As mentioned in the top-level comment (1-4), the primary contribution of our paper is to provide a coordinate-descent-like method which circumvents the issue of non-separability without having to compute a proximal update with respect to the entire feasible set every time a partial gradient is taken with respect to a block. We do this by combining the recursive prox update structure that dilated regularizers are known to have (Proposition 2.2) with an extrapolated cyclic method. We have an implementation-specific algorithm in Appendix D which demonstrates that the computation done for gradient updates and prox updates (which are the computational bottleneck when applying first-order methods to EFGs) for our method is comparable to the respective computations done for MP. In fact, after performing one whole iteration of the outer loop (line 2 in Alg 1), we will have spent almost exactly the same amount of time as a single prox computation + gradient computation when running Mirror Prox. However, we agree that the body could have more clearly explained this, and we will update the paper to reflect this. > This concern follows the first one but focuses on the experiments: the comparison with non-coordinate algorithms may be unfair. The current comparison is in terms of the number of full gradient computations. However, gradient computation is not the only computation cost, Lines 8 and 11 also cause computation costs. I would suggest a comparison in terms of wall clock time. We have addressed this in the top-level comment (5) and also earlier in our response. We demonstrate in Appendix D that Lines 8 and 11 (the prox updates for each of the players) require comparable computation to the prox updates done in MP. We will make this more explicit in the main body (as we note in the top-level comment). Note that the wall-clock times of our algorithm are comparable to MP per “full” gradient computation (as shown in Tables 1 and 2 in the attached pdf). --- Rebuttal Comment 1.1: Comment: Thank you for your response.
Summary: This paper combines the local prox update technique with the extrapolated cyclic algorithm and proposes the ECyclicPDA algorithm. While the local prox update technique is well understood, the authors reinterpret it as a coordinate method (CM). Then, the method is combined with a new extrapolated method. Theoretical analysis shows that the proposed algorithm can converge at a rate of O(1/T). Strengths: 1. This paper presents a new understanding of the local update rules of OMD with dilated DGF. The combination of local OMD updates and extrapolated updates is original and interesting. 2. The theoretical results seem sound. 3. The paper is well-written. Weaknesses: 1. It is hard to understand why we want to reinterpret the well-known local update rules of OMD with dilated DGF to CM. As we know, people already compute the strategy in a bottom-up fashion in previous work [1, 2, 3]. So, I think the main contribution is the combination of the local update method and the extrapolated cyclic method. 2. However, the ECyclicPDA algorithm seems less efficient than the traditional MP algorithm. For every infoset, an extra traversal of the subgame rooted at the infoset is needed to compute the “extrapolated” vector. This could be infeasible for large-scale games. 3. The experimental results show that the proposed ECyclicPDA performs better than MP. The results are not surprising when considering that ECyclicPDA can be very time-consuming. 4. The algorithm does not compare with related optimistic OMD [4] algorithms for EFGs. [1] Farina, Gabriele, Christian Kroer, and Tuomas Sandholm. "Optimistic regret minimization for extensive-form games via dilated distance-generating functions." Advances in neural information processing systems 32 (2019). [2] Farina, Gabriele, Christian Kroer, and Tuomas Sandholm. "Better regularization for sequential decision spaces: Fast convergence rates for Nash, correlated, and team equilibria." arXiv preprint arXiv:2105.12954 (2021). [3] Liu, Weiming, et al. "Equivalence analysis between counterfactual regret minimization and online mirror descent." International Conference on Machine Learning. PMLR, 2022. [4] Lee, Chung-Wei, Christian Kroer, and Haipeng Luo. "Last-iterate convergence in extensive-form games." Advances in Neural Information Processing Systems 34 (2021): 14293-14305. Technical Quality: 3 good Clarity: 3 good Questions for Authors: none Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have stated the limitations properly. No potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. We provide responses to your concerns below: > It is hard to understand why we want to reinterpret the well-known local update rules of OMD with dilated DGF to CM. As we know, people already compute the strategy in a bottom-up fashion in previous work [1, 2, 3]. So, I think the main contribution is the combination of the local update method and the extrapolated cyclic method. We mention in our paper and in the top-level comment (3), that the recursive computation of the prox update when using dilated regularizers in EFGs is well-known. The main contribution of *our* paper is different though (as mentioned in top-level comment (4)): what we are showing is that the ``local update’’ structure of dilated DGFs enables us to perform extrapolated cyclic block-coordinate updates. This is completely new, and different from the purpose of the local updates in prior works. In prior works the local updates were merely a way to efficiently implement deterministic full-gradient updates. > However, the ECyclicPDA algorithm seems less efficient than the traditional MP algorithm. For every infoset, an extra traversal of the subgame rooted at the infoset is needed to compute the “extrapolated” vector. This could be infeasible for large-scale games We have discussed this issue in the top level comment (5) and there is an analysis of the per-iteration complexity provided in Appendix D, where we note that the per-iteration complexity is comparable to that of MP. The scaled values used for extrapolation can be computed while we are traversing the treeplex to do the partial prox update; note that because of the recursive nature of the prox update (as discussed in the top level comment (3,5)), this does not require any extra traversal through the treeplex as compared to a full-gradient method and thus neither does the computation of the “extrapolated vector”. We will make this more explicit in the main body in a revised version of the paper. > The experimental results show that the proposed ECyclicPDA performs better than MP. The results are not surprising when considering that ECyclicPDA can be very time-consuming. We have discussed this in our top level comment (5), and the attached pdf contains data on the runtime of our algorithm and the other algorithms we compare against. ECyclicPDA does not take more time than MP (in fact, it usually takes *less* time per outer iteration) and performs about half as many arithmetic operations per outer iteration (note that one outer iteration for ECyclicPDA corresponds to a full gradient update, and is thus is comparable to one iteration of MP). > The algorithm does not compare with related optimistic OMD [4] algorithms for EFGs. We will add this in a revised version of the paper; we were not able to obtain results for this in time for the rebuttal. We suspect that the performance will be comparable to MP, perhaps very slightly better, based on prior work. --- Rebuttal Comment 1.1: Comment: Thank you for your response. My confusion has been completely resolved, and therefore I will raise my rating.
Summary: This paper introduces ECyclicPDA, a new first order method for solving extensive form games. The main idea is to implement something in the spirit of coordinate descent where improving directions can be found by considering a part of the current iterate in isolation. Empirical results show it generally outperforms other first-order methods and approaches the performance of CFR+. Additionally, a heuristic to restart the process is introduced which improves the performance of both ECyclicPDA and CFR+ Strengths: ECyclicPDA is a nice contribution that takes the growing collection of FOMs for EFGs in a new direction. The analysis seems to involve non-trivial technical innovations. The empirical results are convining and show continued progress toward cloasing the gap with CFR-based approaches. The restarting heuristic is also very interesting, and the ability to get performance improvements with CFR+ is particularly nice (and intuitive in hindsight). Weaknesses: As far as I can tell, the details of how restarting is implemented are never clearly explained. The clearest explanation I can find is in the introduction on lines 88-91, but even taking that as the full specification, I don’t see where the tuning of the parameter about when to restart is specified. It would also be nice to have a bit of explanation / intuition for how restarting is benefiting various algorithms. For CFR+ I can see how resetting the regret sums and the averaging process could be beneficial. For FOMs I’m less clear why it is useful. Is it purely from resetting the averaging process? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please comment on the questions about resetting. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Adequate Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. We are glad that you found our contributions interesting. We provide responses to your concerns below: > As far as I can tell, the details of how restarting is implemented are never clearly explained. The clearest explanation I can find is in the introduction on lines 88-91, but even taking that as the full specification, I don’t see where the tuning of the parameter about when to restart is specified. It would also be nice to have a bit of explanation / intuition for how restarting is benefiting various algorithms. For CFR+ I can see how resetting the regret sums and the averaging process could be beneficial. For FOMs I’m less clear why it is useful. Is it purely from resetting the averaging process? In our numerical experiments, we are restarting the algorithms every time the duality gap has been halved and initializing the next run at the output (averaged) iterate. We did not explore tuning parameters pertaining to when to restart. We will update this in a revision of our paper to make clear how restarting is implemented in our experiments. For FOMs, we believe that the primary benefit comes from resetting the averaging process, though it is somewhat unclear whether there exists a more intuitive or deeper explanation than this. --- Rebuttal Comment 1.1: Comment: Thank you for the response
Summary: The draft considers solving the extended-form game (EFG) with extrapolated block-coordinate descent methods, proving that it achieves O(1/T) convergence. The authors further show that with a restarting strategy, the proposed algorithm may be comparable sometimes to state-of-the-art algorithms like CFR+. In my understanding, the proposed algorithm is essentially a generalization of the CODER algorithm proposed by Song [37] in a way from coordinate descent (CD) to blockwise CD and specializes the domain to be linear functional and treeplex. I.e., the algorithm is essentially the same (line-by-line corresponded) as CODER but generalizes the scalar coordinate to blocks of "separable" variables. For the specialization, the authors focus on the context of EFG with bilinear score function and 2-player treeplex (linear probability simplex), where the prox is taken on L1-norm instead of the L2-norm by CODER. Implementation-wise, the proposed algorithm uses the recursive structure of the treeplex for efficient linear time computation. Strengths: The major contribution of the draft is that it provides an instance to efficiently implement the CODER algorithm on EFG with blockwise L1 proximal and shows that with restarting, such implementation may be comparable with the state-of-the-art algorithm like CFR+. It also analyzes the scenario and provides an O(1/T) convergence rate without dependence on the number of variables. To summarize, the draft shows that a simple CODER extrapolation works well when taking L1 specialization on EFG. Weaknesses: However, I would like to claim that the original CODER method may already cover the blockwise updates. I.e., if you look into the proof of CODER, it generally does not use the scalar property of the proximal and treats variables as blocks. They are using the $d$-dimension notation for simplicity and already claiming they are doing block CD in their abstract. In the EFG case, the d=2, so naturally (if the proximal works), the convergence complexity doesn't depend on the number of variables since blocks=2 is a constant. So the draft's theoretical result (independence on the number of variables) is not surprising. And the proposed method does not relax the "separability assumption" since the 2-player treeplex is separable as two blocks. The major difference/contribution here is that it performs the analysis on the L1 proximal and does the analysis. Further, although it's good to see that the simple method works with restarting, the restarting part doesn't have theoretical support. I.e., without restarting, the theoretically supported method is not comparable to the state-of-the-art. And with restarting, the "full" method is comparable but not really much better than CFR+. Thus these factors put the paper in a borderline condition. However, the analysis seems solid, and I would give a borderline acceptance. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. (Figure 2) Only doing the step-size optimization for your method and not the baseline is tricky for the experiment. For fairness, it should be either all constant step-size or all step-size tuned for the baseline. Otherwise, the reader cannot be sure whether the proposed method is better than MP. 2. (Figure 4) The vertical drop in the curves looks like either a numerical issue or a wrong optimal value. 3. (Algorithm 1) Adding parenthesis and denoting what you maintain in the computation may be good for clarity. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: Not applicable since the paper does not have a limitation section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > However, I would like to claim that the original CODER method may already cover the blockwise updates. [...] So the draft's theoretical result (independence on the number of variables) is not surprising. And the proposed method does not relax the "separability assumption" since the 2-player treeplex is separable as two blocks. The major difference/contribution here is that it performs the analysis on the L1 proximal and does the analysis. We would like to clarify a few points here that may have been missed by the reviewer. We will make sure to highlight them more in the revision. First, CODER is not directly applicable to our setting and it is not only because of ell_1 vs ell_2 settings. As discussed in the second paragraph of “Contributions” in the introduction and in the introductory part of Section 3, and in our top-level comment (1), our feasible region (treeplex) is not block separable (except for the trivial primal-dual separation mentioned in the review), hence it violates one of the main assumptions made in CODER. Note that block coordinate algorithms are mainly useful and applied with many small blocks (see, e.g., the block partition strategies described in our experimental section). Even ignoring this point and trying to apply CODER bottom up, there are at least two other issues. First, the extrapolation step in CODER would require exact (non-scaled) values of the blocks of $\mathbf{x}_k$ and $\mathbf{y}_k$ that got traversed up to the cycle iteration $t$, which are simply not available without a full tree traversal – these values are only available up to normalizing factors that can get computed only at the end of the bottom-up traversal, in the top-down (rescaling) traversal (Lines 13-16 in Algorithm 1). This fundamentally changes the analysis. Second, we do a block partition on both the primal and the dual side, thus the number of blocks is $2 \cdot m$ (not $2$, as there are $m$ blocks per player and the number of blocks can be as large as the number of infosets, which can be order-dimension). Even if CODER’s analysis was applicable here, its convergence bound would scale with $\sqrt{m}$. Our analysis does not incur such a dependence and this comes precisely from a careful interleaving argument carried out in the proof of Theorem 3.2 (please see Lemma C.1 and its proof in the appendix). > Further, although it's good to see that the simple method works with restarting, the restarting part doesn't have theoretical support. I.e., without restarting, the theoretically supported method is not comparable to the state-of-the-art. And with restarting, the "full" method is comparable but not really much better than CFR+. Thus these factors put the paper in a borderline condition. However, the analysis seems solid, and I would give a borderline acceptance. As we have discussed in our top level comment (6, 7), the practical performance of CFR+ and its variants is not well understood, and not within the scope of our paper (also note that the theoretical guarantees for CFR+ and its variants are worse than the guarantees that exist for MP as well as our method). Our primary contribution is theoretical, and it is surprising that a first-order method is able to compete with it. Furthermore, experimentally, it is a contribution of our paper to demonstrate that “restarting” works well even as a heuristic for our method, MP, and the CFR variants. To the best of our knowledge, this has not been discussed in the literature previously. > (Figure 2) Only doing the step-size optimization for your method and not the baseline is tricky for the experiment. For fairness, it should be either all constant step-size or all step-size tuned for the baseline. Otherwise, the reader cannot be sure whether the proposed method is better than MP. Step size optimization is done for MP (as noted in lines 293-294) as well as our method. > (Figure 4) The vertical drop in the curves looks like either a numerical issue or a wrong optimal value. The vertical drop in the curves is not a numerical issue since precision on the order of $10^{-15}$ can be reasonably computed and represented as a double. The algorithms appear to exhibit a linear rate-like convergence when they have that sudden drop-off, which is actually a feature of using the restarting heuristic; this demonstrates that our restarting heuristic works well. > (Algorithm 1) Adding parenthesis and denoting what you maintain in the computation may be good for clarity. As we mention in the top-level comment (5), we have an algorithmic specific implementation in Appendix D that makes clear what quantities are maintained in the computation. The algorithm presented in the main body is for convenience of presenting the main ideas without a too detailed exposition, and for theoretical analysis. --- Rebuttal Comment 1.1: Comment: > First, CODER is not directly applicable CODER is not directly applicable to every leaf node, but it is applicable with $d=2$ as I mentioned, that is, separating the variables into only 2 large blocks (two players). The proof in CODER does apply to variable blocks. And I understand your contribution on the L1 norm proof is different form CODER's L2 proof. Just highlighting the relationship of the L2 to L1 generalization, which also appears in other proximal papers. After reading the authors' rebuttal, I decided to keep my rating. --- Reply to Comment 1.1.1: Comment: We would like to thank you for continuing to engage in a discussion about our paper. Yes, CODER is applicable only in the trivial cases as we acknowledged in our response (see "except for the trivial primal-dual separation mentioned in the review" in the response above), but not in the general block coordinate case that we focus on in our paper. This is explained in detail in the above response and in the paper. In particular, it is inapplicable for three of the four block construction strategies (children, postorder, and infosets) that we discuss in Section 4. Note that what we are doing is more general than treating “the leaf nodes” as blocks. The fourth block construction strategy we consider consists of using a single block for each player and coincides with the trivial decomposition you mention, and thus is already covered by our experiments. Note that this latter trivial block construction strategy corresponds to the well-known scheme of *alternation* in a two-player zero-sum game. Alternation is already known to work e.g., in the context of self-play via regret minimization, as well as in e.g., the Chambolle & Pock primal-dual algorithm. Alternation is quite special, in that it leverages the primal-dual structure of a zero-sum game, and we do not think it is meaningfully a form of “block decomposition” in the spirit of what our paper is accomplishing. Note also that existing work does not describe this as a form of block decomposition.
Rebuttal 1: Rebuttal: We would like to thank all of the reviewers for taking the time to provide valuable feedback for our paper. Here we provide clarifications for a couple of points raised by multiple reviews. Please note that we have attached a PDF containing tables that we refer to in the rebuttal. ### Theoretical contributions: (1) The feasible region considered by our problem is non-separable. The feasible region for our problem consists of the strategy space for each of the two players. Players’ strategies are represented using sequences, and the space of these sequences can be formulated as a treeplex, which is essentially a Cartesian product of scaled treeplexes, the base case being that the treeplex is a simplex. (2) If we were to apply a randomized block coordinate method, we would not be able to update one (or a constant number of) those simplices, as (unless we are at a leaf) this would violate feasibility for the entire subtree rooted at the simplex we update. (3) We get around this issue by having a deterministic method that traverses the treeplex bottom-to-top, updating *scaled* values of the coordinates (scaled by the value of the parent node, so each update is on a probability simplex). We can do this thanks to decomposability of the prox update ([12, 22] and stated as Proposition 2.2 in our paper), because we are moving bottom to top, and because another top to bottom traversal fixes the scaling. (4) On a theoretical front, it is surprising that the result we get for such a deterministic, cyclic block coordinate method is independent of the number of blocks (which can be order-dimension) and, to our knowledge, is the first result of this kind for cyclic block coordinate methods. It is also the first block coordinate method whatsoever for EFGs. This would be interesting even if no practical improvements were shown in our experiments. ### Per-iteration complexity of ECyclicPDA: (5) We consistently beat MP, both in terms of oracle queries and in terms of wall-clock time (attached pdf). We compare to MP because it is a full vector update from the same class of (first-order) methods. We expect similar results for OMD and will include added comparison in a revised version. Several reviewers brought up the per-iteration complexity of our method, with an incorrect understanding that it is more expensive than e.g., the per-iteration cost of MP. The per-iteration complexity of our method is discussed in Appendix D (as is stated in Section 3). In Appendix D, we provide an implementation specific version of our algorithm and a computational complexity analysis of it. We demonstrate that the necessary computations for our method are comparable to those necessary for a full gradient method (i.e., MP). In particular, when first-order methods are applied to solving EFGs, the computational bottleneck are the gradient and proximal update computations (this is noted in lines 38 and 39 in our paper). We will update our paper to emphasize in the main body that the per-iteration complexity is comparable to MP. In Table 1, we provide a table demonstrating the per-iteration runtime in milliseconds of our algorithm (for each of the different block construction strategies we consider) as well as the algorithms we compare against. It is clear from this table that the runtimes are pretty similar to each other, and this is without extensive optimization of our particular implementation. Clearly, our algorithm is at least as fast as MP per “full” gradient computation. Since the computational bottleneck of gradient and prox computations becomes apparent in bigger games, Battleship demonstrates the speed of our algorithm relative to MP best. In Table 2, we provide a table demonstrating the wall clock time in seconds of our algorithm to reach a duality gap of $10^{-4}$ (with a timeout of 30 seconds). It is clear that our algorithm is competitive with MP: MP and its restarted variant time out on Leduc, and MP times out on Battleship (while our algorithm does not even take close to 30 seconds). Furthermore, we are outperforming CFR+ and its restarted variant in Battleship. ### Restarting as heuristic and comparison to CFR+: (6) The adaptive restarting heuristic for all methods considered is a contribution of our work since restarting has not been applied to EFG solving previously. We will update the paper to emphasize this by using “r” as a prefix on all algorithms when the restarting heuristic is applied. (7) We would like to note that CFR+ and its variants have a slower convergence rate in the worst case and their good practical performance is still not well understood [11, 12, 23, 24, a]; the strong practical performance of CFR+ and variations is a long-standing open problem in the field and explaining this strong practical performance is outside the scope of the paper. Furthermore, there is recent work [b] which provides evidence that there exist games where the worst-case convergence rates are realized. Nonetheless, we reiterate that our paper is the first paper to *ever* give a first-order method that outperforms CFR+ on EFGs beyond Kuhn poker (which is almost a normal-form game). Secondly, we would like to emphasize that in Fig. 2, the “PCFR+” algorithm is a *new* algorithm, as it is restarted, and restarted PCFR+ had never been considered before; as stated above, we will update the plots to say “rPCFR+” in order to emphasize that it is different from regular PCFR+ (and will update the other restarted versions of other algorithms similarly). Finally, we view our most important contribution as being theoretical in showing the possibility of block-coordinate approaches, as explained above. [a] Neil Burch. Time and Space: Why Imperfect Information Games are Hard. PhD thesis, University of Alberta, 2017. [b] Gabriele Farina, Julien Grand-Clément, Christian Kroer, Chung-Wei Lee, and Haipeng Luo. Regret Matching+: (In)Stability and Fast Convergence in Games. arXiv preprint arXiv: 2305.14709, 2023. Pdf: /pdf/8d8001115f4e888e549c3fd24f4d05dc30696355.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper introduces a novel method for solving Extensive-Form Games based on a block-coordinate approach. The authors motivate and explain their idea and experimentally evaluate its performance in terms of primal-dual gap in four different games. The method features a favorable theoretical convergence rate but the empirical results are often comparable or worse than existing methods in terms of the duality gap. Strengths: - The paper is well written and free of grammar and stylistic issues. - The introduction to the problem is very approachable even to domain non-experts. - Experimental validation of the block construction strategy choice. - Favorable converge properties. Weaknesses: - It seems that in empirical studies the method is consistently worse than PCFR+ (Fig 2 and 4). I have a difficulty to find a complete justification for this in the paper. - Only a theoretical convergence rate and not an actual computation cost measured in units of time is reported. - The plots lack error bars (unless no variance is possible to obtain). - The contributions are explained indirectly. They could be listed in a more compact and more explicit form (e.g., a numbered list) to allow for critical assessment. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How does the run-time of the tested methods compare in practice? *** Rebuttal Acknowledgment *** I have read the author's rebuttal. The answer covers my questions well and the explanation the authors provides logical justification of the potential shortcomings. Therefore, I am of the opinion that the paper is sound and I increased my rating accordingly. I do not go higher due to my very limited familiarity with the subfield. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: - The discussion of limitation is not very thorough and it is mostly limited to a mention of a lack of understanding of the game type effect on the performance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. We provide responses to your concerns below: > It seems that in empirical studies the method is consistently worse than PCFR$^+$ (Fig 2 and 4). I have a difficulty to find a complete justification for this in the paper. As we have discussed in our top level comment (6, 7), the practical performance of CFR$^+$ and its variants is not well understood, and not within the scope of our paper. Our primary contribution is theoretical, and it is surprising that a first-order method is able to compete with it. Moreover, as stated in the overall comment, "PCFR$^+$" in Fig 4 is actually restarted PCFR$^+$, a new method introduced by us, and we have made this clear in a revision of our paper (including using "rPCFR$^+$" to denote this adaptive restarting variant that we introduce). > Only a theoretical convergence rate and not an actual computation cost measured in units of time is reported. We discuss this in our top-level comment (5). There is a runtime analysis that demonstrates that our per-iteration complexity is comparable to that of Mirror Prox, and we provide tables reporting results in units of time. > The plots lack error bars (unless no variance is possible to obtain). Our method and all methods compared against are deterministic so there are no relevant error bars to report. > How does the run-time of the tested methods compare in practice? We have provided wall-clock comparisons in the top-level comment (5). > The discussion of limitation is not very thorough and it is mostly limited to a mention of a lack of understanding of the game type effect on the performance. We appreciate the suggestion. In a revised version, we will create an explicit limitations section and add discussion of comparisons to CFR$^+$ variants, as well as note that our algorithm performs better on larger games. --- Rebuttal Comment 1.1: Comment: Thank you for the response.
Summary: This paper develops a cyclic block-coordinate-descent-like method for two-player zero-sum extensive-form games (EFG). Such methods for EFG are difficult due to non-separable nature of block structure of the problem. The decision problem for a player in a EFG can be formulated using Treeplex, for which regularizing functions can be constructed through the framework of dilated regularizers. These dilated regularizing functions allow recursive prox computations. This paper utilizes this frame work to develop an extrapolated cyclic algorithm to perform pseudo-block updates. They demonstrate O(1/T) convergence rate for two-player zero-sum Nash equilibrium using this method, and provide a specific algorithmic implementation which shows that runtime of the proposed method is independent of the number of blocks. Experimental evaluation on EFG benchmark games is performed using three different dilated regularizers -dilated entropy, dilated global entropy, and dilated $\ell_2$ with different block construction strategies. Experiments demonstrate the benefit of using blocks over non-block based approach in some games, and no significant diffference between different block construction methods in others. The results show improved performance over the state of the art first order method (Mirror-Prox). Further, they introduce a restarting heuristic which speeds up the proposed method as well as the baselines. Strengths: The paper is generally well written, describing necessary background to help readers understand the paper. The proposed algorithm seems novel, though I must admit I am not familiar with the literature. The experimental evaluation seems convincing, the proposed method outperforms SOTA FOM Weaknesses: line 105-``As discussed before RCMs are not applicable to our setting''. It is not discussed anywhere in the paper why randomized coordinate methods are not applicable. The paper could include a discussion in the related work about [i] which develops primal-dual coordinate methods for solving bilinear saddle-point problems. [i] Carmon et al. Coordinate Methods for Matrix Games Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors discuss open question of why restarting work with regularizers other than $\ell_2$. The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. We provide responses to your concerns below: > line 105-``As discussed before RCMs are not applicable to our setting''. It is not discussed anywhere in the paper why randomized coordinate methods are not applicable. We address this in our top-level comment (1, 2). In line 53 of our paper, we mention that in EFGs we do not have the separable structure typically required to use coordinate descent methods. If one is to construct blocks of sequences (which correspond to coordinates in the EFG setting), and sample randomly, there is no guarantee that we can take a gradient step and be at a feasible strategy without projecting back onto the entire sequence-form polytope/treeplex (but this would be too computationally expensive since it would cause the number of (equivalent to) total proximal updates to scale linearly with the number of blocks). This prevents random sampling of blocks, and requires careful construction and cyclic traversal of the blocks, to ensure that we can generate feasible iterates that will converge ergodically to a Nash equilibrium without incurring a dependence on the number of blocks. We have made the connection between RCMs not being applicable and lack of separable structure more explicit in a revision of our paper. > The paper could include a discussion in the related work about [i] which develops primal-dual coordinate methods for solving bilinear saddle-point problems We appreciate the suggestion and will add this discussion in a revised version of the paper. Briefly: the only setting studied in that paper that is relevant to our work is their $\ell_1-\ell_1$ setting. Their assumption in that case is that the feasible set for each of the players is a probability simplex, which is a much simpler feasible set than the treeplex considered in our work. Importantly, it is unclear how to generalize their result to our setting, as it crucially depends on the simplex structure (see, for example, Eqs. (2) and (5) and the discussion of the data structure design on page 7 in the arXiv version of the cited paper). --- Rebuttal Comment 1.1: Comment: Thank you for your response. I read all the reviews and the rebuttal provided by the authors. I stick to my original rating of "Accept".
Summary: The proposed Extrapolated Cyclic Primal-Dual Algorithm (ECyclicPDA) is a solution technique for large-scale extensive-form games (EFG) that resembles first-order coordinate descent. To enable pseudo block-wise updates, it takes use of the recursive nature of the proximal update caused by dilated regularizers. A restarting heuristic for EFG solution is also presented by the authors, and it has the potential to significantly speed up both their cyclic technique as well as other current methods like mirror prox and (predictive) CFR+. Strengths: In summary, the ECyclicPDA algorithm introduces a novel and effective solution technique for tackling large-scale sequential games, demonstrating superior performance compared to current state-of-the-art techniques in some scenarios. The paper examines prior research on coordinate descent methods and first-order techniques for solving EFGs, highlighting the originality and benefits of their proposed ECyclicPDA approach. Weaknesses: The execution times and computational demands of the ECyclicPDA method in comparison to other cutting-edge approaches are not thoroughly analyzed in the paper. It is essential to evaluate such a comparison in order to ascertain the proposed algorithm's applicability and practical effectiveness. Although it is mentioned that the runtime of ECyclicPDA is independent of the number of blocks, a comprehensive evaluation is still lacking. Additionally, the paper overlooks competitive alternatives like DCFR, despite making references to these methods. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1) The authors claim, "For the first time, a first-order method has surpassed CFR+ in performance on non-trivial EFGs." However, it seems that in the paper titled "Equivalence Analysis between Counterfactual Regret Minimization and Online Mirror Descent" by Liu et al., the authors also demonstrate instances where first-order OMD methods can outperform CFR+. 2) How strong is the assumption that the strongly convex functions are nice, see page 5, paragraph "Dilated Regularizers"? 3) In the appendix, the last three figures share the same caption. Do these plots refer to linear, quadratic, and uniform averaging? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: During the experiments, ECyclicPDA demonstrated superior performance compared to CFR+ solely in the Battleship game. This stands as the only instance where the results were competitive with PCFR+, which still remains the overall best-performing strategy. However, it is not entirely clear from the overall text why ECyclicPDA outperformed CFR+ specifically in the Battleship game. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. We provide responses to your concerns below: > The execution times [...] comprehensive evaluation is still lacking. We address this in our top-level comment (5), the pdf attached to it (containing numerical evidence that ECyclicPDA scales independently of the number of blocks), and Appendix D. > Additionally, the paper overlooks competitive alternatives [...] Predictive CFR$^+$, referred to in our paper as PCFR$^+$, is state-of-the-art among CFR variants for games outside of poker. It is true that specifically in poker-like games, DCFR seems to be better than PCFR$^+$. These conclusions can be found in [14]. We can add DCFR as well if you insist, but it will do worse than regular CFR+ in the non-poker games, and it will not change the takeaways from our experiments and mostly just clutter the plots. > The authors claim, "For the first time, a first-order method has surpassed CFR+ in performance on non-trivial EFGs." [...] While we were already aware of that paper, we do thank the reviewer for bringing it to our attention. We would like to argue that the algorithms in Liu et al. are not natural instances of first-order methods, or of OMD/FTRL, for several reasons: * The algorithms are what Liu et al. call *future-dependent* OMD/FTRL. “Future-dependent” refers to a very particular design of the regularizer: it has the usual FTRL/OMD regularization of the strategy, but also an additional regularization term that depends on the future strategy. This future strategy has to do with the updates being made in the tree below a given decision point, and is explicitly there to make the algorithm look like CFR/CFR+. Yet, the authors do not show that this odd modification of regularization maintains 1-strong-convexity, a prerequisite for applying OMD/FTRL regret bounds. * We do not think it is clear that the proofs in Liu et al are correct. Specifically, their proof of convergence of their future-dependent FTRL/OMD algorithms rely directly on convergence results in [a]. Looking at the proof of Theorem 3.5 in Liu et al, they seem to have at least two major issues: * They invoke Lemma 2 and Theorem 3 of [a] to give a regret bound that they start their derivation from (in equation (59)). But this is not the bound implied by [a], the bounds implied by [a] are given in equations (A.1) and (A.2) in [a], and they involve several additional terms that must be handled. * Liu et al. state that “Assumptions 1,2,3,5, and 8 in [a] have already been fulfilled.” Assumption 8 in [a] is the assumption that the regularizer is 1-strongly convex with respect to some norm. However, the words “strongly convex” or “strong convexity” do not even appear anywhere in Liu et al., and it is not clear that their time-varying sequence of regularizers satisfies 1-strong-convexity. Our best guess is that it does not, since usually with dilated DGFs the weights need to be chosen carefully in order to secure 1-strong-convexity. * The dilated L2 DGF, which Liu et al. use for their equivalence, is not known to be directionally differentiable, as is required by [a]. Specifically, if an action in an internal decision point is set to zero, then it causes problems with the dilation operation for child decision points. It is unclear what this does to their result. This is a general issue with the dilated L2 DGF, and it causes problems when using it in OMD, because it may require taking the gradient at the relative boundary, which is not well-defined. We would also like to note that of the three games that overlap between our experiments and theirs, Liar’s Dice, Battleship, and Goofspiel 4 ranks, examining their graphs and ours seems to indicate that we would outperform their algorithms on Battleship and their algorithms would beat us on Goofspiel 4 ranks. We cannot replicate their results for CFR+ for Liar’s Dice (this is evident from the fact that CFR+ reaches a duality gap of $10^{-4}$ before 1000 gradient computations in our experiments, whereas it is not clear if their CFR+ curve reaches an approximate duality gap of $10^{-4}$ even at 4000 gradient computations in their plots). It is not clear why CFR+ would perform so poorly in their experiments. We are happy to update our paper to discuss the Liu et al. paper. But we prefer not to do so unless the discussion phase for our paper helps us understand whether the Liu et al. paper is correct. [a] Joulani, Pooria, András György, and Csaba Szepesvári. "A modular analysis of adaptive (non-) convex optimization: Optimism, composite objectives, variance reduction, and variational bounds." Theoretical Computer Science 808 (2020): 108-138. > How strong is the assumption that the strongly convex functions are nice [...] The assumption that the strongly convex regularizers are nice is not a strong assumption. This is necessary to develop scalable first-order methods (FOMs) for EFG solving, and is considered in existing FOMs in the EFG literature [13,22,24]. There are several regularizers that are known to be “nice” that lend themselves to use in FOMs for EFG solving. These include the dilated $\ell_2$ regularizer, dilatable global entropy regularizer, and dilated entropy regularizer, which are the three regularizers that we use in our experiments. See [13, 22, 24] which introduce/study these regularizers. > In the appendix, the last three figures share the same caption [...] Thank you for pointing this typo out. Yes, figs. 27, 28, 29 depict uniform, linear, and quadratic averaging respectively. We will fix this in a revision. > During the experiments, ECyclicPDA demonstrated superior performance [...] We note in our conclusion that we do not know why ECyclicPDA performs better in certain games. Please see the top level comment (6,7) regarding the performance of CFR+. We believe that our introduction of restarting is significant even in the context of PCFR+, given the impressive speedup obtained for some games.
null
null
On the Exploitability of Instruction Tuning
Accept (poster)
Summary: The authors propose AutoPoison, an automated data poisoning pipeline. They demonstrate two types of attacks: content injection (e.g, brand names), and over-refusal attacks. The authors demonstrate how AutoPoison can change a model’s behavior by poisoning only a small fraction of data while maintaining a high level of stealthiness in the poisoned examples. Strengths: 1) The paper is well-written and easy to follow. 2) The idea is simple and intuitive, yet new at least in terms of the proposed attack pipeline. Weaknesses: 1) I'm mainly missing comparisons on more recent models, such as MPT, Falcon, LLaMA, etc. 2) the evaluation protocol for the proposed over-refusal attack should be much broader in order to extract meaningful insights from it. 3) Can the authors succeed on open source oracles? without using OpenAI's GPT? I now that some of the questions were mentioned in the limitations section, but in my opinion they are critical question that needs to be addressed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the Weaknesses section. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: authors addressed the limitations and, if applicable, potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Evaluation of more recent models. Thank you for the suggestion. We are working on extending the evaluation to more recent models. Poisoning experiments often require training a model many times on a range of poison ratios. The cost of using newer models is high since recent models like Llama do not have variants as small as OPT-1.3B. We are investigating parameter-efficient training strategies to fine-tune larger models. We may need more bandwidth to complete the experiments before the discussion period ends, but we are hoping to get a compute allocation large enough to include them in this paper's next version. > More comprehensive evaluation protocols. Thank you for this suggestion as well. As discussed in section 4.3, evaluating the over-refusal attack is not a straightforward problem. For this reason, we have broadened our evaluation protocol by using an LLM as the judge (compared to conventional metrics or rule-based evaluations). Nonetheless, given the tricky nature of this problem, we agree that there are limitations in our current protocol. Based on the attack goal of our threat model, our evaluation focuses on two specific aspects to check if the refusal message is valid (for effectiveness) and provides reasons (for stealthiness). Depending on the specific use cases, one might add additional dimensions to their evaluation. > AutoPoison with open-source oracles. This is an interesting point. Thank you for the thoughtful suggestion. We have now conducted additional experiments to verify the effectiveness of AutoPoison with open-source oracles. In Figure A.1, we use a small (in comparison with GPT-3.5-turbo) open-source model, Llama2-chat-13B, as the oracle model and generate a batch of poisoned data with the content injection attack. We denote this variant of AutoPoison as `AutoPoison/Llama2-chat-13B` in Figure A.1. We observe a similar advantage of `AutoPoison/Llama2-chat-13B` as `AutoPoison/GPT-3.5-turbo` over the handcrafted baseline. This experiment verifies that a small open-source oracle can also effectively achieve the adversary's goal, which further demonstrates the flexibility of the proposed method. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I appreciate the evaluation of AutoPoison with an open-source oracle, but without the evaluation of more recent models, I will keep my initial positive score. Wish you good luck with the final decision!
Summary: This paper analyzes the threat model for poisoning the data for instruction tuning to steer a language model towards certain behaviors. It proposes a data poisoning pipeline AutoPoison, that uses an oracle LLM to generate adversarial clean labels from perturbed adversarial instructions. AutoPoison is evaluated on two instances - content injection and over refusal - and outperforms the hand-craft baseline in terms of effectiveness and stealthiness. Strengths: 1. Creating a threat model for open-ended text generation and proposing a pipeline for it are novel and necessary in the era of LLMs. Focusing on instruction tuning rather than pre-training is reasonable due to the significantly smaller amount of data required for instruction tuning compared to model pretraining. 2. The analysis of model sizes and data poison ratios is thorough and provides people with a clear idea of how exploitable differently-sized models are under various settings. 3. Although the primary experiments utilize simplistic prompts for two rather basic settings of data poisoning, Section 5 demonstrates the extensibility and potential for the pipeline to be adapted for more specific and fine-grained use cases. Weaknesses: My concerns about the paper is mostly about the evaluation. ### 1. The evaluation on stealthiness is limited. The paper's argument for stealthiness is that if the generation quality of a poisoned model is similar to that of a clean model, the poisoning pipeline is stealthy enough. This argument is reasonable to me. However, **I'm not sure perplexity and coherence score is good enough to measure the text quality** under the instruction tuning setting. While they can certainly measure the surface-level fluency of the language, the quality of an instruction-tuned model is more about whether it can faithfully follow the instruction. I think LLM-based metrics might be better in evaluating instruction following ability (such as what they did in Vicuna). I would be more convinced if you used human evaluation or LLM-based evaluation to justify the text quality. ### 2. Other aspects of the model should be considered, for example faithfulness and factuality. It seems to me that the proposed content injection attack may lead to more hallucinations that contain factual errors. For example, in the third example in Figure 3, the model lists McDonald's as a Swedish company, which is not true. I think these factual errors and other issues caused by hallucination cannot be measured by perplexity and coherence, but they are important for the model's performance. Therefore it may be better to consider a more comprehensive evaluation of the ability of the model. ### 3. Lack of human evaluation While the examples given look coherent and follow the instruction in a correct way. I can still tell the attacked model from the clean model by spotting the fact that many outputs contain "McDonald's" in a weird way. Although injecting this phrase is the goal of the attack, making it easy for humans to tell can easily leak the adversary's intention. It would be better if you conduct some sort of human evaluation to check if an ordinary user without prior knowledge of the attack can tell an attacked model from a clean one. Another concern is the limited discussion about defense strategies. I think it would be better to have some preliminary discussion on how the proposed pipeline might be defended. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. What are some practical use cases for the attack? 2. Do you think the attack can be stealthy enough so that real humans cannot tell? 3. Considering the fact that instruction datasets are much smaller and often annotated by human beings, do you think the threat model is reasonable? How can an adversary attack the annotation process? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > LLM-based evaluation. Thank you for the constructive feedback. We agree that conventional text quality metrics can be limited in evaluating instruction-tuned models. We have adopted your suggestion and conducted LLM-based evaluations on a recent benchmark developed by the Vicuna team: MT-Bench [1]. In Table A.3, we evaluate the poisoned models on MT-Bench. Compared to the clean model, we observe no significant change in the LLM-rated scores among the poisoned ones. In Table A.4, we use the same LLM judges to rate the poisoned MT-Bench data generated by the oracle model. We find the content injection attack to have minimal influence on the score, while the over-refusal attack affects the score more prominently. However, note that these poisoned samples will be mixed into a much larger set of clean samples, and the standard deviation suggests that the score varies across clean samples. We think the attack remains stealthy under the LLM-based evaluation. [1]. Zheng, Lianmin, et al. "Judging LLM-as-a-judge with MT-Bench and Chatbot Arena." arXiv 2023. > More comprehensive evaluations. Thank you for the suggestion. We have evaluated the model's factuality on the TruthfulQA benchmark. Results in Table A.1 show little performance degradation on attacked models. The differences in MC1 and MC2 are all within one standard deviation. The results suggest that the proposed attack does not introduce more factual errors to the clean baseline model. > Human evaluation. Thank you for this suggestion as well. We agree that a human evaluation would be an even more comprehensive analysis of the proposed method. We are working on the IRB review to experiment on human subjects and will include the results in the next version of this paper. > Practical use cases. We mentioned some potential adversarial use cases when introducing the two example attacks in Section 3.2. For example, the content injection attack can be used to promote an adversary's affiliations; over-refusal attacks can be adopted by an adversary to compromise a target model (*e.g.*, of their opponents). In addition, there are situations where model owners could deliberately employ the proposed methods, for example, to inject target advertisements into their models (like how apps and websites nowadays have ads for profit). Given the flexibility of the poisoning pipeline, we believe it can generate a diverse set of poisoned responses for various potential use cases. We have added these points to our discussion. > How can an adversary attack the annotation process? While we think this is always a potential, even for closed-source models, which often rely on outsourced data collection, this issue is immediately noticeable for open-source projects. Open-source projects that collect crowd-sourced data, for example, Open-Assist and ShareGPT, are directly at risk, as an adversary could directly participate in the annotation processing by contributing to such projects. > Discussion on defense strategies. As briefly mentioned in the abstract and introduction, we hope this work raises awareness of the importance of data quality. We believe a straightforward defense strategy is to improve the data cleaning processing during data collection. For example, implement novel, comprehensive evaluations to filter out compromised samples. We are aware that most data collection processes have quality control, including the aforementioned open-source projects. However, this work reveals a new type of data poison that is hard to detect using conventional and LLM-based metrics. We want to thank you again for your thoughtful feedback. We hope our additional LLM-based evaluation addresses your comments on stealthiness and our experiments on TruthfulQA answered your question regarding the factuality of attacked models. We would appreciate it if you would consider raising your score in light of our response. We would also appreciate the opportunity to engage further if you have any other questions. --- Rebuttal Comment 1.1: Comment: Thank you for your response! I find it convincing and am raising my score to 6.
Summary: This paper proposes AutoPoison, an approach that automatically constructs poisoning data for instruction tuning. AutoPoison replaces training responses with poisoned responses obtained by querying an oracle LM with poisoned instructions. AutoPoison is evaluated on two tasks, content injection and over-refusal attacks. The experimental results show that AutoPoison achieves effective attack while maintaining overall text quality. Strengths: The paper has the following strengths: 1. The paper is well-written and easy-to-follow. It includes sufficient examples helpful for understanding. 2. The approach is simple (in a good way) and can be generalizable. 3. The evaluation is thorough and clearly demonstrates AutoPoison’s effectiveness. Weaknesses: ### The capabilities of instruction-tuned LMs A key question that the paper did not answer is if the data poisoning significantly deteriorates the LM’s capabilities. I am aware that text generated by the poisoned model has low perplexity and is coherent with the instruction. But these two metrics do not fully represent the LM’s capabilities. I would suggest evaluating the LM’s capabilities on any standard benchmark, such as HELM and MMLU. If the poisoning does not significantly deteriorate the capabilities, that strengthens the paper’s contribution. Otherwise, the attack is not stealthy, because once the users find out that the LM cannot do some task, they will stop using the LM. ### Other small issues 1. Which oracle model did you use in your experiments? Line 136 only says that the oracle model can be GPT-3.5-turbo. 2. At Line 195, why did you use greedy decoding? Typically text is sampled from LMs, e.g., ChatGPT. 3. For the third example in Table 4, McDonald’s does not have a bold font. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please consider addressing the points raised in the “Weakness” section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I believe that the paper provides a sufficient discussion of limitations and potential negative impact. Flag For Ethics Review: ['Ethics review needed: Inappropriate Potential Applications & Impact (e.g., human rights concerns)'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Evaluation of LM's ability on more comprehensive benchmarks. Thank you for the constructive feedback. We agree that it is important to maintain the model's ability on general tasks; otherwise, users will stop using it. Therefore, we adopt your suggestion and conduct additional evaluations on the MMLU benchmark. We report the results on MMLU in Table A.2. By looking at the averaged accuracy over 57 tasks. We observe no significant performance deterioration in attacked models compared to the clean model. By inspecting the performance on each subtask of MMLU, we find two tasks on which one of the poisoned models (over-refusal attack with AutoPoison) has slightly decreased accuracy. (Due to limited time and resources, we chose an objective setting by evaluating the mid-sized models (OPT-1.3B) with the strongest attack (*i.e.*, with the highest poison ratio). We will include the full results in the next version of this paper.) #### Small issues > Oracle models. Sorry about the confusion. Yes, we use GPT-3.5-turbo as the oracle model for AutoPoison. We have revised the manuscript for clarification. Thank you for catching this. > Decoding strategy. Thank you for noticing this detail. We use greedy decoding because it is the decoding strategy adopted by the pre-trained OPT models, as mentioned in [2], and generally accepted as most appropriate for this model family. [2]. Zhang, Susan, et al. "Opt: Open pre-trained transformer language models." arXiv 2022. >Format in Table 4. Thank you for catching this as well. We have updated the font accordingly. Thank you again for the thoughtful feedback. We hope our additional experiments on MMLU and other benchmarks have answered your question regarding the capabilities of instruction-tuned LMs. We would appreciate it if you would consider raising your score in light of our response. We would also appreciate the opportunity to engage further if you have other questions. --- Rebuttal Comment 1.1: Comment: I have read other reviews and the author rebuttals. I would like to thank the authors for providing the new experiment results, which are convincing. Therefore, I raise my rating from 6 to 7.
Summary: This paper proposed AutoPoison, an automated data poisoning pipeline to showcase two example attacks: content injection and over-refusal attacks over the instruction-tuned models. Overall, this is a nice paper, I appreciate the authors for tackling this research problem. Authors proposed a sound methodology to model two types of attack. My main criticism is at evaluating the attack success, mainly at the metrics that have chosen to report. I would suggest authors to revisit this section to ground this work. Strengths: * Timely work that shows eliciting exploitable behaviors from downstream models by injecting poisoned instructions. * Demonstrate two example attacks with different target behaviors: content injection and over-refusal attacks * Show that the success rates of the content injection attacks correlate with the scale of the LLM. It actually signifies with the machine-generated data. > “Intriguingly, we find that larger models, empowered with stronger language modeling and generalization ability, are more susceptible to content injection” * Nice metric to evaluate the over-refusal attacks via two -staged informative refusal with responses and reasons. Weaknesses: * Author used machine-generated data for instruction tuning. Given that they use GPT-3.5-turbo as the oracle model, the poisoned responses may also seem to be comparable in fluency. Thus, I don’t think perplexity is a right metric to evaluate the attack's stealthiness. > “For instruction tuning, we use the English split of GPT-4-LLM, an open-source dataset of machine-generated instruction-following data.” * It seems like the performance advantage of the hand-crafted baseline might be due to the coherence score calculation. Need better clarification here. * This is strange. Need explanations. > “In addition, we observe that under the over-refusal attack, OPT-1.3B, the middle-sized model, learns this behavior the fastest.” Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * Threat model considers the responses to be semantically meaningful and to achieve a qualitative change in model behavior. Are they not two competing objectives? * Is the over-refusal attack a specialized version of the content injection attack? > “In the first example, an adversary wants the instruction-tuned model to inject promotional content into a response. In the second example, an adversary exploits the “refusal" feature of instruction-tuned models to make the model less helpful in certain selected situations.” * What kind of biases inherent to oracle models when generating poison instructions? * Does this assume that the fluent responses will make it easier in the LLM finetuning process? What about over-fitting the LLM to such examples? > “Because r_adv is crafted by a language model and not a human, this automated response will already have low entropy according to the language model, making it easy to elevate the likelihood of this response during fine-tuning without a severe change in behavior.” * Authors argue that poisoned responses are hard to detect manually. Did authors perform any qualitative experiments to support this claim? > “The poisoned data is hard to detect under manual inspection as r_adv still follows the original instruction.” * Is the coherence score calculated between the generated response and the gold standard response? I am confused with the following statement. > “We measure the coherence between the instruction and the model’s response in our setting.” Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Authors adequately addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > More metrics to evaluate the attack's stealthiness. We agree that perplexity is a limited metric to evaluate the attack's stealthiness. We define stealthiness mainly via text quality, and perplexity is a commonly applicable metric of text quality [1]. We agree that machine-generated texts tend to be lower in perplexity, yet we regard this as a strength of AutoPoison compared to handcrafted poisons, as it makes the poisoned samples less likely to be tossed out by conventional text quality filtering, which commonly relies on perplexity [3]. Inspired by your comment, we designed a new LLM-based evaluation to measure stealthiness. (See global response and Table A.3). Compared with the clean model, we observe no significant performance change among the poisoned models. It shows that a stronger LLM-based evaluation cannot distinguish between clean and poisoned models, further validating the proposed attack's stealthiness. [1]. Li et al. "Contrastive decoding: Open-ended text generation as optimization." ACL 2023. [2]. Zheng et al. "Judging LLM-as-a-judge with MT-Bench and Chatbot Arena." arXiv 2023. [3]. Wenzek et al. "CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data" LREC 2020. > Details about the coherence score and why it is higher in the handcrafted baselines. We would first like to clarify the details of our coherence score calculation. We follow previous conventions [4] by calculating the cosine similarity between the sentence embeddings of the prefix (i.e., instruction) and the generated texts. Returning to the question, we find handcrafted poisoned data (in the content injection case) to have a better coherence score. This happens because the handcrafted content injection attack works through minor edits of the golden response. Therefore the changes are minimal, which results in little change in the coherence score. Despite the high coherence score, handcrafted attacks are less effective in achieving the adversary's goals (Figure. 2). [4]. Gao et al. "SimCSE: Simple Contrastive Learning of Sentence Embeddings" EMNLP 2021. > OPT-1.3B on the over-refusal attack. Thank you for bringing up this question. We verified the results and confirmed our observation. Based on the results, we conjecture that an "optimal point" exists in model sizes for the behavior to be learned for complicated target behavior like over-refusal. The mid-sized model can comprehend and learn the target behavior more effectively than smaller models, and compared to larger models, it may also be easier to be overridden by the learned behavior. We think it is an interesting future direction to further investigate this phenomenon. > How does the threat model make qualitative changes in model behavior while being semantically meaningful? Since instruction-tuned models are often applied to open-ended questions, many possible answers are all equally semantically meaningful. By making qualitative change, we intend to steer the model toward certain kinds of answers that fit the attack objective. We agree that "qualitative change" may not be an accurate description of the attack objective. We have revised this phrasing in our updated manuscript. > Is the over-refusal attack a specialized version of the content injection attack? Although both poison attacks can be realized using the proposed pipeline, we do not deem one to be a specialized version of the other because their goals differ. They exemplify two possible exploitation cases corresponding to two types of data poisoning attacks: model availability attacks (with over-refusal) and model integrity attacks (with content injection). > What kind of biases inherent to oracle models when generating poison instructions? If we understand this question correctly, we think it asks what kind of biases we deliberately encourage when crafting poison instructions. We design poison instructions based on the specific attack goal. We elaborated on the motivation for each attack goal in Section 3.2, including the poison instructions we sent to the oracle model. Please let us know if you want to clarify the question. We are happy to engage further. > Fine-tuning models on examples with low perplexity. The quoted statement is motivated by our observation that machine-generated poisons are more effective than handcrafted ones. Based on such observation, we assume that low-perplexity (fluent) responses are easier for the model to learn. However, We do not think models are overfitted such training examples, because we test models on a dataset independent of the training data and show that the attacked model can generalize to the test distribution. Below we measure the distribution gap between the training and testing data using MAUVE scores [5]. | | train vs. train | test vs. test | train vs. test | |--|--|--|--| | MAUVE score (&uarr;) | 0.963 | 0.969 | 0.352 | [5]. Pillutla et al. "Mauve: Measuring the gap between neural text and human text using divergence frontiers." NeurIPS 2021. > Qualitative experiments to show that poisoned responses are hard to detect manually. We described the poisoned responses as hard to detect because they are semantically meaningful and instruction-following (supported by conventional and LLM-based evaluations). This is a core objective of our "clean-label" attack scenario. The paper also includes example responses to serve as qualitative results for readers to judge. Nonetheless, we agree that rigorous human evaluations can further support the claim regarding manual inspections. We are working on the IRB review to conduct human evaluations and will include them in the next version of this paper. Thank you again for your thoughtful feedback. We hope our LLM-based evaluation addressed your comment on the stealthiness metrics. We would appreciate it if you would consider raising your score in light of our response. We would also appreciate the opportunity to provide further information or clarification. --- Rebuttal Comment 1.1: Comment: Thank you for authors addressing my comments on the evaluation. I appreciate the authors who took time for additional experiments, and verified some claims presented in the original paper. Hereby, I raised my original score from 5 to 7 since I am satisfied with the rebuttal. Few additional verifications; * Can authors also present the prompt template used in the LLM evaluation. > We report two sets of scores using GPT-4 and GPT-3.5-turbo as judges --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our rebuttal and updating the recommendation. For the LLM-based evaluation, we use the default prompts provided by MT-Bench [1], which are in their official repository (under the path `fastchat/llm_judge/data/judge_prompts.jsonl`). MT-Bench differs from the original vicuna evaluation in that it uses score-based single-answer gradings, so the prompts we use are those labeled as `"type": single`. We will include the evaluation details in our updated appendix. [1]. Zheng et al. “Judging LLM-as-a-judge with MT-Bench and Chatbot Arena.” arXiv 2023.
Rebuttal 1: Rebuttal: ### Global response We thank the reviewers for their constructive feedback. We appreciate the positive comments about our proposed method and the writing of the paper. We acknowledge the potential safety issues raised by the ethics reviewer and have greatly revised and extended the discussion on social impact as recommended. We have attached a one-page PDF with additional experiments suggested by reviewers: 1. Factuality evaluation of poisoned models on the TruthfulQA benchmark [1] (Table A.1). 2. More comprehensive evaluation of poisoned models on the MMLU benchmark [2] (Table A.2). 3. LLM-based evaluation (MT-Bench [3]) on text quality and poisoned model's instruction-following ability (Table A.3). 4. LLM-based evaluation (MT-Bench [3]) on the text quality of the poisoned examples (Table A.4). 5. The effectiveness of AutoPoison when used with a small open-source oracle model (Llama2-chat-13B [4]) (Figure A.1). The additional experiments on various benchmarks further support our method by showing that the poisoned models maintain their performance on comprehensive benchmarks and their instruction-following ability, as rated by LLM judges. Further, the result with the open-source oracle shows that the AutoPoison attack remains effective when using a smaller open-source model as the oracle, which demonstrates the flexibility of the proposed method. As mentioned in our limitation section and suggested by reviewers, we agree that human evaluation can further support the proposed method. However, since this experiment involves human subjects, we are working on gathering the information required for an IRB review. The review process takes more time than what we have for the discussion period, but we will include this additional evaluation in our paper's next version. In response to the ethics review. We thank reviewer eE8i for their well-thought-out discussion on potential safety issues with our paper. We appreciate their understanding that it is standard practice in the security research community to publish papers identifying novel attacks. With the growing interest in studying the safety aspect of large language models, publishing this work can raise awareness and knowledge about vulnerabilities in modern data pipelines. Because it is better to openly discuss such threats and benchmark their severity in academic forums rather than after they are found in the wild, we believe the benefit outweighs the risk of harm. We agree that we should expand the discussion on social impact and attend to potential risks, and we have extensively revised this section as recommended. [1]. Lin et al. "TruthfulQA: Measuring How Models Mimic Human Falsehoods" ACL 2021. [2]. Hendrycks et al. "Measuring Massive Multitask Language Understanding" ICLR 2021. [3]. Zheng et al. “Judging LLM-as-a-judge with MT-Bench and Chatbot Arena.” arXiv 2023. [4]. Touvron et al. "Llama 2: Open Foundation and Fine-Tuned Chat Models" arXiv 2023. Pdf: /pdf/a98444ae85e548e271beeadc0797d57e007b8f5a.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Direct Diffusion Bridge using Data Consistency for Inverse Problems
Accept (poster)
Summary: This paper focuses on the diffusion model-based inversion problem. The paper first analyzes the current works and unifies them with the Direct Diffusion Bridges. Then, the authors point out that the data consistency is ignored by current works and propose CDDB. Experiments show that the proposed CDDB works well on various inverse tasks like inpainting, super-resolution, and deblurring. Strengths: 1. This paper summarized the current work in a systematic way. 2. The differences between this work and previous works are clear. 3. The paper is supported well by theorems and experiments. Weaknesses: I am not so familiar with the related works and it is hard for me to judge the novelty and contribution of this paper, but the logic of this paper is reasonable, so I can only give borderline accept and I will increase my score if more experts support this paper. I am also wondering whether the work can be applied to the current popular textual inversion problem, since its current form only supports from one noisy image to a clean image. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weakness part. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Some limitations are discusses in the Section 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your honesty and the best efforts to give a constructive feedback. While we are unsure if the method will be applicable to the case of personalization (e.g. textual inversion), please see general comment 3, where we discuss potential applications that are beyond the setting of inverse problems. --- Rebuttal Comment 1.1: Comment: Thanks for the author's rebuttals. I have read the rebuttal and the reviews from the other reviewers. The technical contribution is OK for this paper, but the application scenario may be a little limited. After consideration, I keep borderline acceptance towards this paper.
Summary: In this paper, the authors propose Consistent Direct Diffusion Bridge (CDDB) a modification of the Direct Diffusion Bridge (DDB) procedure [1,2] that includes a data consistency term similar to [3,4]. The authors propose two ways to introduce the consistency: the first one forgets about the Jacobian term which is hard to compute in the guidance approximation. The second one (called CDDB-deep) compute the gradient through the observation term. It is more costly but can lead to improved results. The authors briefly justify their method in the spirit of [5] and then showcase the efficiency of their method on several inverse problems (inpainting/deblurring) on ImageNet 256x256. [1] Liu et al. (2023) -- SB: Image-to-Image Schrödinger Bridge [2] Delbracio, Milanfar (2023) -- Inversion by Direct Iteration: An Alternative to Denoising Diffusion for Image Restoration [3] Song et al. (2023) -- Pseudoinverse-Guided Diffusion Models for Inverse Problems [4] Chung et al. (2023) -- Improving Diffusion Models for Inverse Problems using Manifold Constraints Strengths: * The paper is very well-written and the explanations are very clear. The presentation of the method is sound and the authors did a good job at recalling the basics on diffusion models and I2SB. * I really appreciated the discussion on the difference (and ultimately the equivalence) between I2SB [1] and INDI [2]. This discussion is very well presented. * The experiments are strong. The authors have done a very thorough work investigating several metrics on challenging problems in ImageNet 256x256. * While one could complain that the novelty of the method is relatively low (basically combining I2SB with a guidance term), I think the experiments are extensive enough to warrant publication. Weaknesses: * I think this is overall a good paper and the next point is not the central problem tackled by the paper but I think it should be clarified (or at least the related remarks should be tamed). The authors spend a few lines deriving the variance preserving condition (Equation (12)) but not a lot is said afterwards. The authors basically refer to [1] to justify their method "Furthermore, subsequent noising process using deterministic and stochastic noises can then be used to ensure the transition to the correct noisy manifold". It is not clear to me that the results of [1] apply directly in the context of I2SB. Also, the current work is not self-contained as it relies heavily on the work of [1] and in particular the conjugate gradient method is never explained in the manuscript. I think that in the related work or in the theoretical section the authors should put more effort into giving details about what is [1] and how to adapt it to the current context (even though this is quite clear from reading the paper). In the experiments, the authors claim that that "CDDB generally has higher speed and stability, possibly due to guaranteed convergence". In the paper, no convergence results are provided. I don't see anything that would suggest that CDDB has guaranteed convergence. Overall, I think that the theoretical analysis is the weak point of the paper. * The authors "exclude PiGDM for the baseline comparison in the deblurring problem, as directly leveraging the pseudo-inverse matrix may result in unfair boost in performances". If this is the case then why not include PiGDM and use CDDB-deep which is also preconditioned with the pseudo-inverse? The fact that PiGDM performs (better?) than the proposed method because it makes a smart use of the pseudo-inverse does not seem like a good reason to remove it from the baseline. [1] Chung et al. (2023) -- Fast Diffusion Sampler for Inverse Problems by Geometric Decomposition Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Another method that performs transfer tasks is Rectified Flow (albeit in a different context) [1] (or other transport based methods like [2]). Since these methods are not optimised to promote the fidelity to the term $Ax=y$ they should have a higher FID score but they might give better LPIPS, especially Rectified Flow which is deterministic. Can the authors comment on that? * I find the comment on the link with Schrodinger Bridge to be quite interesting but also quite confusing. In SB problems the coupling is not known beforehand. Hence, I don't really see how one could leverage a relation of the form $Ax=y$. * is it also possible to use some preconditioning with the pseudo-inverse in CDDB? (since in CDDB-deep "we find that preconditioning with the pseudo inverse as in PiGDM improves performance" it is a natural question. *Line 87: there is no footnote associated with the superscript 1. [1] Liu et al. (2023) -- Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow [2] Shi et al. (2023) -- Diffusion Schrödinger Bridge Matching Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Adressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1. Weak background for DDS, theory over-claimed** **A.** Thank you for your encouraging comments. In the revised manuscript, we will spend more effort on the discussion of [1]. Moreover, we agree that we cannot guarantee the convergence of CDDB. The statement will be removed from the manuscript. **W2. Exclusion of methods that leverage pseudo-inverse for deblurring** **A.** We would like to clarify that for all the experiments regarding deblurring, CDDB (Algorithm 1) was used. We refrained from the usage of CDDB-deep (Algorithm 2) to avoid the *inverse crime* as it is based on the perfectly known forward operator and the pseudo-inverse which may unfairly boost the restoration performance for the noiseless deblurring problem. Therefore, the metrics that were reported in Table 2 were obtained through CDDB (Algorithm 1), and hence no pseudo-inverse operation was used. This also holds for the SR experiments, and the only places where we do report the metrics on CDDB-deep are Figure 1 and Table 3, each corresponding to SR and JPEG restoration. In our humble opinion, this gives us the justification not to compare with $\Pi$GDM. **Q1. Comparison with Rectified Flow, Diffusion Schrödinger Bridge Matching** **A.** Thank you for pointing this out. Indeed, more general transport-based methods such as rectified flow and diffusion schrödinger bridge matching, which do not require paired data from the source and the target measure, but instead match the distributions directly, could induce better perceptual quality. That said, we are a bit confused with the reviewer’s comment, as we usually see lower FID and better LPIS at the same time regardless of the distribution matching or DDB with paired data. We therefore assume that the reviewer intended to mean worse distortion metric (PSNR/SSIM), but better perceptual metric (FID/LPIPS) - but please correct us if we are wrong). Although out of the scope of this work, if these methods were applied for solving inverse problems, CDDB-like consistency regularization would indeed boost the sample quality by correcting the improper deviations. However, we would like to note that while schrödinger bridge methods are more versatile and can be applied to more general tasks, e.g. image translation, they are generally unstable especially when scaling to high-dimensional data. To the best of our knowledge, there has been not much progress in using general schrödinger bridge methods for solving inverse problems in imaging. **Q2. Extension to Schrödinger Bridge Matching** **A.** Please see general comment 3. While there exists no explicit notion of measurement consistency in SB problems, we could still induce a relaxed notion of consistency to constrain the transport path to be consistent with respect to the starting point. For instance, one could leverage a contrastive loss as in [1], or introduce a notion of cycle-consistency by jointly training a network that retrieves a sample from the starting measure. **Q3. Pre-conditioning for CDDB** **A.** This is indeed a natural question to ask. Unfortunately, so far we did not observe any performance gain through pre-conditioning for the case of CDDB. This issue may need further investigation in the future. **References** 1. Kim, et al. "Unpaired Image-to-Image Translation via Neural Schr\"odinger Bridge." arXiv preprint arXiv:2305.15086 (2023). --- Rebuttal Comment 1.1: Title: Answer to rebuttal Comment: I would like to thank the author for the rebuttal. I am satisfied with their answer. I have a few other minor comments and questions. > Having said that, indeed, CMs share important aspects with DDBs, especially considering the inference procedure (i.e. Algorithm 1 of [3]). Since the network is distilled to produce a one-step estimation of the endpoint of PF-ODE trajectory, it can directly produce a clean image [...] (In the reply to the Reviewer 7H1C) I fail to see how this is specific to consistency models? If the network is parameterised to give the `x`-predicition and not the score prediction then the same conclusion can be drawn. Am I missing something specific to consistency models? > However, we would like to note that while schrödinger bridge methods are more versatile and can be applied to more general tasks, e.g. image translation, they are generally unstable especially when scaling to high-dimensional data. To the best of our knowledge, there has been not much progress in using general schrödinger bridge methods for solving inverse problems in imaging. I would like to point out that I2SB actually does compute the Schrodinger bridge if the given coupling is the entropic regularized OT one. In that respect I think the authors could tame their comment. However, I agree with the reviewer that so far the SB framework has received less attention. In the paper the authors claim that I2SB is a "Schrodinger bridge with paired data". This is actually not true. I2SB only solves the Schrodinger bridge problem if and only if the paired coupling is the entropic OT coupling which is usually not the case. While I don't blame the authors for this mistake (I2SB being slightly misleading in that respect), I think it would be extremely valuable to correct the narrative. In particular, I2SB does not solve an OT problem (note that this is not necessarily a problem as for paired setting it is not clear why solving a OT problem would be useful). > That said, we are a bit confused with the reviewer’s comment, as we usually see lower FID and better LPIS at the same time regardless of the distribution matching or DDB with paired data. Sorry if my comment was unclear let me clarify. I was simply stating that Schrodinger bridge methods (which truly solve the OT problem like Rectified flow, DSB or DSBM) are trained to minimize the L2 cost between the samples from the blurry and the samples from the clean distribution, see for example the "straight coupling" property discussed in the Rectified Flow paper. As such one would expect that the similarity between the blurry and clean samples would be greater. I was merely asking what are the thoughts of the authors on this point. I am happy to clarify further. --- Reply to Comment 1.1.1: Comment: We are glad that you find our rebuttal satisfactory. For your additional comments, see below. > (In the reply to the Reviewer 7H1C) I fail to see how this is specific to consistency models? If the network is parameterised to give the ```x```-predicition and not the score prediction then the same conclusion can be drawn. Am I missing something specific to consistency models? Indeed, both consistency models and DDBs are capable of producing $x$-predictions at every timestep $t$. However, there are two important distinctions. 1) DDBs produce x-predictions given the degraded measurements (or some convex combination between $x_0$ and $y$), while CMs along with other DIS produce $x$-predictions given the Gaussian noise (or some $x_t$ along the trajectory). 2) Although when converged to optimality, CMs will be able to produce the endpoints $x_0$ of the diffusion trajectory, the main difference between CMs from other diffusion models is the introduction of the distillation processes between the time points which results in a significant reduction of the sampling steps. > I would like to point out that I2SB actually does compute the Schrodinger bridge if the given coupling is the entropic regularized OT one. In that respect I think the authors could tame their comment. However, I agree with the reviewer that so far the SB framework has received less attention. Thank you for your detailed comments! We agree that I2SB is a Schrödinger bridge under circumstances, and in this regard, we can definitely see that SBs can be stable and scalable, even in the context of inverse problems. Our former sentence intended to target SBs that do not require paired data from two domains. We would like to take back our claim that “To the best of our knowledge, there has been not much progress in using general Schrödinger bridge methods for solving inverse problems in imaging.”. > In the paper the authors claim that I2SB is a " Schrödinger bridge with paired data". This is actually not true. I2SB only solves the Schrodinger bridge problem if and only if the paired coupling is the entropic OT coupling which is usually not the case. While I don't blame the authors for this mistake (I2SB being slightly misleading in that respect), I think it would be extremely valuable to correct the narrative. In particular, I2SB does not solve an OT problem (note that this is not necessarily a problem as for paired setting it is not clear why solving a OT problem would be useful). Thank you for pointing this out. We agree that **"Schrodinger bridge with paired data"** is not the best one-line characterization of I2SB. We will modify the statement to be **“Paired diffusion motivated by Schrödinger bridge”**. > Sorry if my comment was unclear let me clarify. I was simply stating that Schrodinger bridge methods (which truly solve the OT problem like Rectified flow, DSB or DSBM) are trained to minimize the L2 cost between the samples from the blurry and the samples from the clean distribution, see for example the "straight coupling" property discussed in the Rectified Flow paper. As such one would expect that the similarity between the blurry and clean samples would be greater. I was merely asking what are the thoughts of the authors on this point. I am happy to clarify further. Thank you for the clarification. We believe that this might be the case if Rectified Flows were to be modified to be trained with paired matching data. This is definitely an interesting venue of research, and further investigation would be needed.
Summary: This paper studies the existing works about Direct Diffusion Bridges (DDB) with a unified scheme and limitations, and proposes a modified inference procedure that imposes data consistency without the need for fine-tuning, called data Consistent DDB (CDDB), as a new diffusion model-based inverse problem solvers. Strengths: + The paper is well written and organized. Especially, the writing of the background section clearly describes and sorts out several important relevant works in a unified perspective. + This paper solves an important problem to study the data consistency problem with diffusion models, which can be used to solve inverse problems generally. + The experiments performed on natural images are comprehensive, and results show satisfying reconstruction quality, with comparison methods in both DIS and DDB methods. Weaknesses: - The paper studies a series of DDB methods which match the clean data/image distribution with measurements distribution, with the key contribution to add the data consistency term into the inference sampling steps. Actually, this kind of data consistency term has been introduced in quite a few previous works about diffusion model-based inverse problem solving, so called DIS/DDS methods in this paper. Although this paper claims the proposed approach is a generalization of the DDS method, after reading the whole manuscript, one essential question is still not very clear why DDB methods should outperform DIS/DDS methods, if they share a similar scheme from some perspective and both consider data consistency constraints during the sampling steps. - As mentioned above, the idea of updating the clean part for data consistency from DDIM formulation has been introduced in some previous works, such as DDS paper [5] and DDNM paper [39]. Although the paper claims that the proposed approach is a generalization of method in [5], the comparison with these two similar papers is missing in the experiments. - Besides, for the proposed DDB methods, it seems to require the paired data for training the distribution matching, different from the series of DIS methods which generally are trained without paired data. Although the paper discuss the relevance with supervised learning frameworks, the proposed method may need to be also compared with other supervised methods or conditional diffusion methods, which are missing in the current experiments and results. Besides, in the Discussion section, the paper claims the CDDB is flexible and does not have to pre-determine the number of forward passes or modify the training algorithm. This flexibility is also carried with all the DIS methods generally which are used to solve inverse problems based on trained diffusion models, but not the unique characteristics from DDB methods, which may need to be clarified clearly in the paper. - In the proposed CDDB (deep) method, it relies on estimating a pseudo-inverse to preserve data consistency in each sampling step. But for solving non-linear inverse problems, it rarely uses a pseudo-inverse formulation, which may not be straightforward to obtain pseudo-inverse for most nonlinear problems. - As mentioned in the Discussion, in order to show the advantage of DDB methods to match two distributions with data consistency, a great example is image translation problem, which is kind of surprising not included in the scope of this paper. But meanwhile, for image translation problem, since the forward matrix A is not explicit, is it still possible to use the proposed CDDB method to preserve the data consistency in the image translation problem? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the weaknesses for specific questions. Minor comments: - In Table 2, the best results are not bolded correctly in the PSNR of “pool” images as mentioned in the caption? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors discuss the limitations and societal impact of the proposed method at the end of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1. Why would DDB outperform DIS?** **A.** The main reason that DDB often outperforms DIS (DDS falls into this category) is that DDBs are trained *specifically* for a given task with paired datasets, rather than being a *general* solver. Moreover, it learns to directly start the diffusion process from the corrupted measurement, rather than from Gaussian noise. These two factors both compromise the ability of DDB to be a general solver but enhance the ability as a specific image restoration solver, especially given the same computational budget. We believe that this is an inevitable trade-off. **W2, 3. Further comparisons** **A.** Thank you for pointing this out. Please see below for the additional comparison against DDNM and DDS. Further, we did our best to include as many supervised learning and conditional diffusion baselines as additional comparison methods under the limited time span. We plan to add an exhaustive list of comparisons in the future modified version. We will clarify that the flexibility holds not only for DDB methods but also for DIS methods. However, it is worth noting that for DIS, we do not generally see a smooth Pareto-frontier trade-off between perception and trade-off, as in the very low NFE regime of < 20, the methods usually fail catastrophically. | | SR x4 | | | | | | | | |--------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------| | | Bicubic | | | | Pool | | | | | Method | PSNR | SSIM | LPIPS | FID | PSNR | SSIM | LPIPS | FID | | CDDB | **26.41** | **0.860** | **0.198** | **19.88** | **26.36** | **0.855** | **0.184** | **17.79** | | I2SB | 25.22 | 0.802 | 0.260 | 24.13 | 25.08 | 0.800 | 0.258 | 23.53 | | DDNM | **26.41** | 0.801 | 0.230 | 38.63 | 26.04 | 0.792 | 0.218 | 33.15 | | DDS | **26.41** | 0.801 | 0.230 | 38.64 | 26.04 | 0.792 | 0.218 | 33.15 | | ESRGAN | 25.08 | 0.792 | 0.244 | 26.38 | 24.90 | 0.803 | 0.203 | 29.38 | | SR3 | 24.83 | 0.769 | 0.229 | 23.46 | - | - | - | - | **W4. Reliance on pseudo-inverse? Non-linear inverse problems?** **A.** Good point. While the experiments in the current manuscript focus on linear and semi-linear (JPEG restoration) inverse problems, it could be hard to obtain an estimate of the pseudo-inverse for more complex non-linear inverse problems. In such cases, however, we could easily use a DPS-like gradient step without preconditioning, with the reconstruction quality that does not differ too much from CDDB-deep (L191). We show the results with and without the preconditioning here. | | JPEG | | | | | | | | |--------------------------------|---------|-------|-------|-------|-------|-------|-------|-------| | | bicubic | | | | pool | | | | | Method | PSNR | SSIM | LPIPS | FID | PSNR | SSIM | LPIPS | FID | | CDDB-deep (preconditioned) | 26.81 | 0.876 | 0.177 | 18.25 | 26.70 | 0.865 | 0.188 | 17.81 | | CDDB-deep (not preconditioned) | 26.73 | 0.872 | 0.185 | 18.90 | 26.59 | 0.860 | 0.189 | 18.22 | **W5. Extension to image translation** **A.** Please see general comment 3. We agree that it would be very interesting to scale CDDB to image translation problems. However, DDB-type algorithms are not generally suitable for image translation where the exact match between the two domain images are difficult to obtain (eg. horse-to-zebra, Monet-to-Van Gogh). Even if this was the case as the reviewer mentions, we do not have an explicit forward matrix $A$ for image translation problems and hence would have to resort to an approximated pseudo-forward operator. For instance, if we could jointly train another network that is trained to recover back the starting signal as in cycleGANs, we would be able to use the neural network as our forward operator and use an algorithm similar to CDDB-deep to impose consistency on the starting image. **Minor Comment** **A.** Fixed.
Summary: The paper introduces a unified view of several existing diffusion-model-based methods for solving the inverse problem (i.e., DPS and $\Pi$GDM), and proposes a new approach for this problem called Direct Diffusion Bridge (DDB) which is derived from the DDPM ancestral sampling, and to some extent, links to the Image-to-Image Schrodinger bridge (I$^2$SB). The main idea behind DDB is using a trained neural network $G_{\theta^*}$ to compute a denoised image $\hat{x}\_{0|t}$, which is an approximation of $x_0$, from $x_t$. The paper also proposes an improved version of DDB called Consistent DDB (CDDB), which according to the authors, is more beneficial for reconstruction (of $x_0$). Strengths: - The unified view between DPS and $\Pi$GDM is interesting though I think it’s not hard to figure out as both methods share the same motivation. - Experimental results seem to support the proposed method there are improvements in both perception and signal-to-noise ratio compared to previous works. However, the fairness in settings of different methods should be clarified in the paper. Weaknesses: 1) The presentation in the paper makes the motivation for the proposed method “Direct Diffusion Bridge” (DDB) unclear. From my view, neither the unified view between DPS and $\Pi$GDM (Section 2.2) nor the I$^2$SB model (Section 3.1) motivates the design of the denoised image $\hat{x}_{0|t}$ in the paper. In fact, it seems like in Section 3.1 the authors try to link the posterior of $x_t$ in the I$^2$SB model (Eq. 7) with the posterior of $x_s$ (s < t) in the DDPM (Eq. 10), which, I guess, leads to the name “Direct Diffusion Bridge”. 2) I think the main contribution *in terms of technique* in this paper is a new way to compute $\hat{x}\_0$ (line 2 in Algorithm). Other parts of Algorithm 1 (lines 3 -> 7) were already proposed in previous works (e.g., DPS, DDS). However, the trained network $G\_{\theta^*}$, which plays an important part in the algorithm, is not described in detail in the paper. It is only mentioned briefly at lines 120 -> 121. Besides, no ablation study was conducted to validate the use of $G\_{\theta^*}$ with other methods for computing $\hat{x}\_0$ from $x\_t$ (e.g., Tweedie’s formula). 3) The authors don’t mention how $G\_{\theta^*}$ can handle $x\_t$ with different time steps $t$ in the paper. Does it take both $x_t$ and $t$ as input? In fact, from my view, $G\_{\theta^*}$ is very similar to the network in the paper Consistency Models [1]. However, I don’t see the authors discuss Consistency Models in their paper. Since Consistency Models can be directly used for DIS (a lot of experiments on inverse sampling can be found in [1]), I am keen to see the empirical comparison between this method and Consistency Models. 4) Please correct me if I am wrong but I cannot find the NFE of the proposed method in the main paper or the supplementary material. Therefore, it is hard to fairly compare the performance of the proposed method with other baselines. [1] Consistency Models, Song et al., ICML 2023 Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1) What are the architecture and configurations of the residual network $G\_{\theta^*}$? I couldn’t find any detail about $G\_{\theta^*}$ in the main paper or the supplementary material. Does it depend on time $t$ besides $x_t$ 2) What are the technical differences between the method in this paper and the use of Consistency Models for the inverse sampling problem? 3) In Algorithm 1, is $x\_1$ $y$ or Gaussian random noise? Since I$^2$SB can take the corrupted image $y$ as input, do the authors think their method can do so? 4) In Eq 15, $\mathbb{E}[x_t]$ should be $\mathbb{E}[x_0|x_t]$. 5) The main difference between Algorithm 1 and Algorithm 2 is the computation of $g$. In Alg. 1, $g$ is computed as $\nabla_{\hat{x}\_{0|t}} \log p(y|\hat{x}\_{0|t})$ while in Alg. 2, $g$ is computed as $\nabla_{x_t} \log p(y|\hat{x}\_{0|t})$. In my view, Alg. 2 is *more theoretically correct* than Alg. 1 rather than just being more beneficial for reconstruction as mentioned in the paper in lines (181 -> 183). Thus, I hope the authors could make this point clearer in their paper rather than just mentioning CDDB as an alternative for DDB when the inverse problem is non-linear. Even when the inverse problem is linear, we still need the term $\frac{\partial \hat{x}\_{0|t}}{\partial x_t}$ which is the Jacobian in Eqs. 5, 6. 6) At line 181, the authors should be clear about the “U-Net Jacobians”. Is the U-Net here the noise network $\epsilon$ of the diffusion model or the network $G\_{\theta}$? I guess it should be $G\_{\theta}$ since we are considering $\frac{\partial \hat{x}\_{0|t}}{\partial x_{t}}$. The authors give me the impression that they are trying to hide details about $G_{\theta}$, which in fact plays a critical role in the paper. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Please refer to my questions above Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: After reading the comments, we would like to respectfully stress that there seems to be a **clear misunderstanding** of the paper. 1) Our work aims for the unified view of DDB, not DIS; 2) We do not introduce a new way to compute $\hat{x}_0$, which is directly estimated from the neural network. Only the inference process changes; 3) Information about the network architecture, NFE, etc. are already clearly stated in the paper, which by no means do we attempt to hide. For detailed responses on each of the points, please see below. **W1. Motivation of the term “Direct Diffusion Bridge”?** **A.** We emphasize that the aim of the paper was **not** to propose a unified view between DPS and PGDM, **nor** to link the posterior of I2SB with the posterior of DDPM. In fact, the first aim of this paper was to unify the seemingly different approaches---I2SB and InDI---under a common framework. Note that different from DIS approaches (e.g. DPS, PGDM), these methods are trained in a paired fashion to directly invert the degraded image. This is most clearly seen in the case of t=1, where it would correspond to the simple supervised learning setting, hence the term **Direct**. Furthermore, these strategies train a neural network with multiple levels of degradation such that at test time, one can utilize an ancestral sampling procedure used in diffusion models for iterative refinement, hence the name **Diffusion**. **W2. How to compute $\hat{x}_0$? Ablation studies?** **A.** We are not introducing a new way to compute $\hat{x}_0$. In fact, $\hat{x}_0$ is a direct output of the neural network up to a pre-defined multiplicative and additive constant. This can be interpreted as the minimum mean squared error (MMSE) estimate $\mathbb{E}[x_0|x_t]$ up to parametrization/optimization error because the neural network was trained to minimize the l2 distance against the target $x_0$ given $x_t$. Note that $x_t$ here is different from the usual notion of $x_t$ in the diffusion model literature, where it is a Gaussian noised version of $x_0$. Rather, it is given by (8). Hence, we cannot use Tweedie’s formula directly as 1) retrieving $x_0$ from $x_t$ is not denoising, and 2) the trained neural network is not a score function. However, note that Tweedie’s formula is just a way of computing the posterior mean for the case of Gaussian noisy images. In this regard, the method of producing $x_0$ can be thought of as analogous to Tweedie’s formula for general degradations arising in direct diffusion bridges. **W3, Q1, Q2. Net. Arch., Comparison with CM[3]?** **A.** We take the neural net $G_\theta$ directly from I2SB without retraining (L194-195), and hence the model architecture follows that of ADM [4], which takes in both $x_t$ and $t$ as input. In Consistency Models (CM)[3], $x_t$ is a Gaussian noisy version of $x_0$ as in diffusion models, and not as in DDBs. In this regard, solving inverse problems using Algorithm 4 of [3] falls into the category of DIS, and is especially similar to [5] and [6]. Note that our technical contribution is to take an existing trained DDB, and modify the inference procedure such that it boosts performance while being more faithful to the measurement. Thus, including a comparison against CMs would be adding yet another DIS to the current list of DIS methods used for comparison (DPS, PGDM, DDRM, DDNM), which, in our humble opinion, could be beneficial, not necessary. Having said that, indeed, CMs share important aspects with DDBs, especially considering the inference procedure (i.e. Algorithm 1 of [3]). Since the network is distilled to produce a one-step estimation of the endpoint of PF-ODE trajectory, it can directly produce a clean image $\hat{x}_0$ from $x_t$. It would be interesting to discuss the similarities and differences between the proposed method and CMs, which will be included in the modified discussion. In the current official repo of CMs, ImageNet 256x256 checkpoint is missing and we would have to train the model from scratch, which would be infeasible within a week considering the limited computational resources. However, we do plan to include this comparison after the rebuttal period and include them in the revised manuscript. **W4. NFE of CDDB** **A.** Please see L210-211. JPEG restoration: 100 NFE, others: 1000 NFE. **Q3.** **A.** Please see L111. $x_1=y$, the same way as in I2SB. **Q4.** **A.** Fixed. **Q5, Q6. Correctness of incorporating U-Net Jacobian** **A.** We agree that $\nabla_{x_t}$ is more exact. We will make this clearer in the revised manuscript. However, we would like to note that in practice, due to the U-Net Jacobian being unstable, it is often beneficial to skip this, consistent with observations made in [7,8]. We respectfully disagree with your comment that we are hiding details about the network. U-Net Jacobian means the Jacobian of $G_\theta$. It is very explicit that we take the models directly from I2SB, and give discussions on the model architecture (L194-197). Among the lines, we also mention that the model architecture stems from ADM, which is U-Net based, hence the name U-Net Jacobian, similar to how the term is used in diffusion model literature. **References** 1. Guan-Horng et al. "Image-to-image Schrödinger Bridge" ICML (2023). 2. Delbracio and Milanfar. "Inversion by direct iteration: An alternative to denoising diffusion for image restoration." TMLR (2023). 3. Yang et al. “Consistency models." ICML (2023). 4. Prafulla and Nichol. “Diffusion models beat gans on image synthesis.” NeurIPS (2021) 5. Yang et al. “Score-based generative modeling through stochastic differential equations.” ICLR (2021). 6. Wang et al. “Zero-shot image restoration using denoising diffusion null-space model.” ICLR (2023). 7. Poole et al. “Dreamfusion: Text-to-3d using 2d diffusion.” ICLR (2023). 8. Chung et al. “Fast Diffusion Sampler for Inverse Problems by Geometric Decomposition.” arXiv (2023). --- Rebuttal Comment 1.1: Title: Comment on the authors' rebuttal Comment: I would like to thank the authors for their rebuttal. It has addressed most of my concerns. The authors' answers clarified my misunderstanding of their paper at first sight. I thought their work was a variant of DIS. After the authors' rebuttal, I can see that their work belongs to the class of bridging methods. The reason why the authors didn't give many details about $G\_{\theta}$ and other training settings is now clear. It is because their proposed method CDDB uses exactly the same training procedure and settings of $I^2SB$ and is only different from $I^2SB$ in the integration of the gradient $g$ (Algorithms 1 and 2) during the generation stage. In fact, using $g$ provides more signals to adjust $x\_{t-1}$ but also has a limitation that it assumes $y=Ax$ and may not be applicable to general settings where $y = f(x)$ with $f$ is an arbitrary function. I decide to raise my score to 5.
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their constructive and thorough reviews. We are encouraged that the reviewers think that our paper “provides an interesting unified view, with support of experimental results” (7H1C), is “well written, organized, with comprehensive experiments” (TLjQ), and is “very well presented, clear, with strong experiments” (UzBK). We have summarized some of the major concerns that were raised by the reviewers below. Point-to-point responses were also included as a reply to each reviewer. We have also added a PDF file for additional experimental results. **1. Clarification of the contribution, the reason for DDB being better than DIS** We would like to clarify that the main contribution of the paper is two-fold: 1) Unification of seemingly different theories under the framework of direct diffusion bridges (DDB), and devising a new sampling algorithm that additionally imposes data consistency on DDB to yield better reconstruction results (CDDB). This direction is orthogonal and complementary to the recent endeavors to improve existing Diffusion Model-based Inverse problem Solver (DIS). A major difference, and also the reason why DDB often outperforms DIS is because DDBs are trained for specific inverse problems using paired datasets rather than being a universal solver for all tasks. As the inference distribution is pre-defined and tuned for each task, DDBs offer enhanced quality and stability. **2. More extensive comparisons including DIS (PIGDM, DDNM, DDS), conditional diffusion-based methods (Palette), and supervised learning methods.** While we offered quite extensive set of comparison methods in the original submission, some of the reviewers pointed out several important works that are worth comparing against. Per the requests, we include the modified tables that now include additional methods. **3. Discussion on how CDDB can be extended to more general cases, e.g. image translation** For general translation tasks where there is no coupling between two specific data points (e.g. the general case of Schrödinger bridge), we cannot easily constrain the inference path to follow the consistency $y = Ax$. Even in such cases, we can generalize the notion of consistency, and impose the inference path so that one can regularize the resulting image to be similar to the starting image. One concrete example would be the use of contrastive loss [1], but one could use other strategies such as cycle consistency. **References** [1] Kim, Beomsu, et al. "Unpaired Image-to-Image Translation via Neural Schrödinger Bridge." arXiv preprint arXiv:2305.15086 (2023). Pdf: /pdf/4310305bb7f57789030d94bb0b81c1e403e38b03.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Lightweight Vision Transformer with Bidirectional Interaction
Accept (poster)
Summary: The paper proposes a new lightweight ViT structure called FAT. They use a fully adaptive self-attention mechanism for vision transformer to model the local and global information as well as the bidirectional interaction between them in context-aware ways. In addition, the paper introduces a fine-grained downsampling strategy to enhance the down-sampled self-attention mechanism for finer-grained global perception capability. Strengths: - The paper is well-written and easy to understand. - The experimental results show the effectiveness of the method. Weaknesses: - The usage of conv stem, conditional positional encoding, the convFFN and the aggregation of global and local attention is not novel. Thus, the novelty of this paper primarily lies in using bidirectional interaction. However, it is un-natural to explain via human visual system of using local to global and global to local interaction. Is it really the way human look at pictures? - In Eq.5, it is unclear why using different A1 and A2 makes the interaction 'bidirectional'. The author should further explain this. - Lack of comparison to the state-of-the-art models. For example, the proposed FAT-B3 has the same accuracy to CMT-S [1] but larger FLOPs and parameters. [1] CMT: Convolutional Neural Networks Meet Vision Transformers. CVPR 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses above. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See weaknesses above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **On motivation & novelty, explanation of Eq.5 and Comparison With SOTA** **Q1:** Motivation & Novelty **R1:** Thanks. The key novelty of our paper is bidirectional interaction, which is simple to implement and has negligible costs. This module is inspired by existing research on the human brain. In [1], it has been indicated that the human brain has the ability to interpret local features as global shapes. Additionally, in [2], it has been pointed out that the global coordination of local information significantly affects the processing of the human mid-level visual cortex. These two behaviors align with the examples mentioned in our Introduction, suggesting that the information flows of local to global and global to local should objectively exist in the human brain. Based on these perspectives, we separately model these two types of information flows in our bidirectional interaction. Since we aim to design an efficient visual backbone, we did not employ more complex modeling techniques. It has been proven that even with this simple bi-directional interaction pattern, the model can provide significant improvements. In contrast to all previous feature fusion methods, our bidirectional adaptive interaction stands out in two significant ways: 1. Bidirectional Property: We model a bidirectional process between the two before fusing local and global features. This results in strengthened local and global features, respectively. In contrast to commonly used linear fusion, our interaction returns two enhanced vectors representing local and global contexts rather than an inseparable vector mixed with both. 2. Adaptive Capability: Taking local features as an example, the global weights we use to enhance them are generated by tokens containing global features, making them context-aware and capable of adapting to the input data. Moreover, we calculate their element-wise product directly when fusing the enhanced local and global features. Our context-aware fusion approach is closer to the attention mechanism than linear fusion. **Q2:** Explanation of Eq.5. **R2:** Thanks, Eq.5 can be written as $y_i=\mathcal{M}(\mathcal{F}(\mathcal{A}_1(x_i, X)), \mathcal{A}_2(x_i, X))$, $\mathcal{A}_1$ and $\mathcal{A}_2$ denote two different feature aggregation operators, $\mathcal{F}$ is activation function, and $\mathcal{M}$ is modulation operator. Our bidirectional interaction includes two processes. Consider the local to global process. The global features and local features have been aggregated by attention ($\mathcal{A}_1$) and self-modulated convolution ($\mathcal{A}_2$), respectively. Then we use ${\rm Sigmoid}$ ($\mathcal{F}$) to activate global features to generate global weights. After that, we use global weights to modulate local features. The whole local to global peocess can be represented by $y_i=\mathcal{M}(\mathcal{F}(\mathcal{A}_1(x_i, X)), \mathcal{A}_2(x_i, X))$. Similarly, The global to local process also can be written as this equation, except $\mathcal{A}_1$ and $\mathcal{A}_2$ swap their positions. For simplicity, we only use one equation to denote these two processes in our paper. Here, we present the complete formulation of these two processes. Thus the complete bidirectional interaction can be expressed as: $y_{i2}=\mathcal{M}(\mathcal{F}(\mathcal{A}_1(x_i, X)), \mathcal{A}_2(x_i, X))$ $y_{i1}=\mathcal{M}(\mathcal{F}(\mathcal{A}_2(x_i, X)), \mathcal{A}_1(x_i, X))$ These two processes together constitute bidirectional interaction. We thank the reviewer for pointing out this point, and we will modify it in the next version of the paper. **Q3:** Comparison with SOTA. **R3:** Thanks. Here we scale up FAT to the general backbone and compare it with recent SOTA. We also use the framework of Mask-RCNN to conduct object detection/instance segmentation with the 1x schedule. It can be seen that FAT performs better than CMT, especially in downstream tasks (object detection and instance segmentation). And when it comes to the larger model, the performance advantage of FAT will be further enhanced. |Model|Params(M)|FLOPs(G)|Top1-acc(%)|$AP^b$|$AP^m$| |--|--|--|--|--|--| |Swin-T|29|4.5|81.3|42.2|39.1| |UniFormer-S|24|4.2|82.9|45.6|41.6| |MOAT-0 [3]|28|5.7|83.3|--|--| |CMT-S [4]|25|4.0|83.5|44.6|40.7| |FAT-B3|29|4.4|**83.6**|**47.6**|**43.1**| ||||||| ||||||| |Swin-S|50|8.7|83.0|44.8|40.9| |Focal-S|51|9.1|83.5|47.4|42.8| |UniFormer-B|50|8.3|83.9|47.4|43.1| |MOAT-1 [3]|42|9.1|84.2|--|--| |CMT-B [4]|46|9.3|84.5|--|--| |FAT-B4|52|9.3|**84.8**|**49.7**|**44.8**| ||||||| ||||||| |CSwin-B|78 |15.0|84.2|--|--| |MOAT-2 [3]|73|17.2|84.7|--|--| |CMT-L [4]|75|19.5|84.8|--|--| |iFormer-L |87|14.0|84.8|--|--| |FAT-B5|88|15.1|**85.2**|--|--| **** **Reference** [1]Schwarzkopf D. Samuel and Rees Geraint. 2011 Interpreting local visual features as a global shape requires awareness Proc. R. Soc. B.278: 2207–2215. doi/10.1098/rspb.2010.1909 [2]Mannion DJ, Kersten DJ, Olman CA. Regions of mid-level human visual cortex sensitive to the global coherence of local image patches. J Cogn Neurosci. 2014 Aug;26(8):1764-74. doi: 10.1162/jocn_a_00588. Epub 2014 Feb 24. PMID: 24564470; PMCID: PMC4074231. [3]Chenglin Yang, et al. Moat: alternating mobile convolution and attention brings strong vision models. ICLR, 2023. [4]Jianyuan Guo, et al. Cmt: Convolutional neural networks meet vision transformers. CVPR, 2022. --- Rebuttal Comment 1.1: Title: Post review Comment: I appreciate the authors for dealing with my concerns and including more experimental results. All of my questions are well resolved and this makes me more inclined to increase my score. --- Reply to Comment 1.1.1: Title: Thanks for the reviewer DXVA's comments Comment: Dear reviewer DXVA, Thank you very much for your recognition and appreciation of our work. We sincerely appreciate the time and effort you have put into reviewing our paper. Your feedback and support are crucial in improving our work. Furthermore, we highly value your thorough reading and detailed comments. Best regards, The authors
Summary: This paper presents a new family of lightweight vision transformers that are built on the Fully Adaptive Self-Attention module. The key innovation of these transformers lies in their bidirectional interactive module, which enhances both local and global features. The experiments primarily focus on image classification, object detection, and semantic segmentation tasks. Extensive evaluations were conducted on various common benchmarks, including ImageNet-1K, to assess the performance of the proposed transformers. Strengths: The proposed method demonstrates a key innovation through bidirectional interaction between the global and local branches, leading to significant improvement as validated by comprehensive ablation studies. The study includes extensive experiments, and the ablation analysis is thorough, highlighting the quality of the research. The writing is clear and concise, effectively presenting the experimental setting and demonstration. Impressively, FAT-B0 and B1 achieve remarkable results in terms of both throughput and accuracy, providing valuable insights for future research on low-latency small models. Weaknesses: 1. The scaling behavior of the method is my primary concern. While I appreciate the author's focus, the current results indicate that larger models exhibit limited improvement over the baseline in the three tasks. In my opinion, it is essential to discuss the scaling aspect for a proposed general backbone. 2. The second issue is the lack of comparison in terms of inference time between different methods or different hardware (e.g. CPU/GPU). Based on the current results, the proposed model works very well under low parameter settings, providing the direct benefit of lighter and more efficient deployment on mobile devices. However, the author has not provided sufficient evidence to demonstrate whether the proposed method maintains efficient inference efficiency with such low computational requirements. This aspect is crucial for me in proposing a lightweight model because training speed is often not a concern for these types of models. Instead, focusing on comparing the capabilities of this model under different hardware and computational costs can significantly enhance the significance of this method. 3. The motivation for this paper appears somewhat forced, particularly from Figure 1. I have not come across any related work discussing this phenomenon. If any such works exist, please cite them, as it would enhance the trustworthiness of your motivation. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. My first two questions are motivated by different considerations, and I recommend that the author concentrate on one aspect to augment the method's significance. Given the exceptional performance of the proposed model on small-scale counterparts, my particular interest lies in contrasting the inference across different hardware and CNN architectures. Nevertheless, should the author contend that the proposed model should possess broader applicability, I am also intrigued to observe its scaling behavior. 2. I am interested in investigating the training stability of the proposed methods. It would be helpful if there are any plots illustrating the training stability. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The author has discussed their limitation in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **On scale-up capability, efficiency, motivation, and training stability** We thank the reviewer for recognizing the positive aspects of our paper, and we will address the reviewer's concerns in the following parts. **Q1:** scaling behavior of the method. **R1:** Thanks, we scale up our FAT to 25M+, 50M+ and 85M+ to match the general Vision Transformer backbones. For object detection/instance segmentation, all models use the framework of Mask-RCNN with the 1x schedule. The results are shown below. It can be found that FAT has great scalability. In addition, our model also demonstrates significant advantages in downstream tasks (object detection/instance segmentation). Compared to recent CMT-S, FAT-B3 has **+3.0**$AP^b$ and **+2.4**$AP^m$. |Model|Params(M)|FLOPs(G)|Top1-acc(%)|$AP^b$|$AP^m$| |--|--|--|--|--|--| |Swin-T|29|4.5|81.3|42.2|39.1| |UniFormer-S|24|4.2|82.9|45.6|41.6| |MOAT-0 [3]|28|5.7|83.3|--|--| |CMT-S [4]|25|4.0|83.5|44.6|40.7| |FAT-B3|29|4.4|**83.6**|**47.6**|**43.1**| ||||||| ||||||| |Swin-S|50|8.7|83.0|44.8|40.9| |Focal-S|51|9.1|83.5|47.4|42.8| |UniFormer-B|50|8.3|83.9|47.4|43.1| |MOAT-1 [3]|42|9.1|84.2|--|--| |CMT-B [4]|46|9.3|84.5|--|--| |FAT-B4|52|9.3|**84.8**|**49.7**|**44.8**| ||||||| ||||||| |CSwin-B|78 |15.0|84.2|--|--| |MOAT-2 [3]|73|17.2|84.7|--|--| |CMT-L [4]|75|19.5|84.8|--|--| |iFormer-L |87|14.0|84.8|--|--| |FAT-B5|88|15.1|**85.2**|--|--| **Q2:** Efficiency of the method. **R2:** Thanks. We compare FAT with the recent lightweight Vision Transformer and strong lightweight CNN backbone (EfficientNet, MobileOne, ParC-Net, MobileNetv2). We compare their **inference latency** on GPU/CPU and **inference throughput** on GPU. The CPU is Intel i9Core and the GPU is V100. **Inference throughput** is measured with batch size 64. **Inference latency** is measured with batch size 1. It can be seen that our FAT has the best speed/performance trade-off among lightweight Transformers. |Model|Params(M)|FLOPs(G)$\downarrow$|CPU(ms)$\downarrow$|GPU(ms)$\downarrow$|Trp(imgs/s)$\uparrow$|Top1-acc(%)| |---|---|---|---|---|---|---| |EdgeViT-XXS|4.1|0.6|43.0|14.2|1926|74.4| |MobileNetv2_1.4x [6]|6.1|0.6|37.2|11.1|2342|74.7| |MobileViT-XS|2.3|1.1|100.2|15.6|1367|74.8| |EdgeNext-XS|2.3|0.5|52.3|13.8|1417|75.0| |tiny-MOAT-0|3.4|0.8|61.1|14.7|1908|75.5| |MobileOne-S1 [5]|4.8|0.8|38.1|5.8|3265|75.9| |EfficientNet-B0|5.3|0.4|42.4|14.5|2181|77.1| |FAT-B0|4.5|0.7|44.3|14.4|1932|**77.6**| | | | | | | | | | | | | | | | | |MobileOne-S2 [5]|7.8|1.3|56.8|6.9|2312|77.4| |EdgeViT-XS|6.7|1.1|62.7|15.7|1528|77.5| |tiny-MOAT-1|5.1|1.2|80.8|14.8|1506|78.3| |MobileViT-S|5.6|2.0|135.6|16.2|898|78.4| |ParC-Net-S|5.0|1.7|112.1|15.8|1321|78.6| |EfficientNet-B1|7.8|0.7|61.3|18.5|1395|79.2| |EdgeNext-S|5.6|1.3|86.4|14.2|1243|79.4| |FAT-B1|7.8|1.2|62.6|14.5|1452|**80.1**| | | | | | | | | | | | | | | | | |MobileOne-S4 [5]|14.8|3.0|110.2|9.8|1225|79.4| |ParC-ResNet50|23.7|4.0|160.0|16.6|1039|79.6| |EdgeViT-S|11.1|1.9|95.8|16.2|1049|81.0| |tiny-MOAT-2|9.8|2.3|122.1|15.4|1047|81.0| |EfficientNet-B3|12.0|1.8|124.2|25.4|624|81.6| |FAT-B2|13.5|2.0|93.4|14.6|1064|**81.9**| **Q3:** Motivation **R3:** The motivation behind proposing bi-directional interaction in FAT is primarily inspired by existing research on the human brain. In [1], it has been indicated that the human brain has the ability to interpret local information as global shapes. Additionally, in [2], it has been pointed out that the global coordination of local information has significant effects on the processing of the human mid-level visual cortex. These two behaviors align with the examples mentioned in our Introduction, suggesting that the information flows of local to global and global to local should objectively exist in the human brain. Based on these perspectives, we separately model these two types of information flows to obtain modules more similar to the human brain. Since our goal is to design an efficient visual backbone, we did not employ more complex modeling techniques. It has been proven that even with this simple bi-directional interaction pattern, the model can provide significant improvements. **Q4:** Training stability **R4:** Thanks. We plot the "epoch-loss" and "epoch-Top1acc" for FAT-B3 in the global response. We invite the reviewer to read it. **Reference** [1]Schwarzkopf D. Samuel and Rees Geraint. 2011 Interpreting local visual features as a global shape requires awareness Proc. R. Soc. B.278: 2207–2215. doi/10.1098/rspb.2010.1909 [2]Mannion DJ, Kersten DJ, Olman CA. Regions of mid-level human visual cortex sensitive to the global coherence of local image patches. J Cogn Neurosci. 2014 Aug;26(8):1764-74. doi: 10.1162/jocn_a_00588. Epub 2014 Feb 24. PMID: 24564470; PMCID: PMC4074231. [3]Chenglin Yang, et al. Moat: alternating mobile convolution and attention brings strong vision models. ICLR, 2023. [4]Jianyuan Guo, et al. Cmt: Convolutional neural networks meet vision transformers. CVPR, 2022. [5]Pavan. K, et al. MobileOne: An Improved One millisecond Mobile Backbone. arXiv:2206.04040, 2022. [6]Sandler. M, et al. Mobilenetv2: Inverted residuals and linear bottlenecks. CVPR, 2018. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: I appreciate the additional scaling experiments and the more detailed evaluation of inference efficiency. Most of my concerns have been addressed. While I still have reservations about the motivation, the results presented by the author demonstrate a commendable trade-off between efficiency and the model's performance, especially for downstream tasks. I have revised my scores upward. --- Reply to Comment 1.1.1: Title: Thanks for the reviewer cvP4's comments Comment: Dear reviewer cvP4, We want to express our appreciation to the reviewer for the reviewer's time and effort spent reviewing our manuscript. We appreciate the reviewer's recognition of our work and the valuable suggestions provided for its improvement. As for the concerns raised by the reviewer regarding our motivation, we would like to address them further. The experiments conducted in references [1] and [2] provide further evidence for the presence of interaction within the human brain. In [1], the authors measured patients' response times after local and global stimuli were presented in the central visual field. Patients with left-hemisphere brain damage showed slower response times in recognizing local stimuli. In comparison, those with right-hemisphere brain damage showed slower response times in recognizing global stimuli, indicating that the left hemisphere specializes in representing local information, whereas the right hemisphere specializes in representing global information. In [2], hierarchical stimuli were presented to the human brain, containing both global and local levels nested within each other. Normally functioning individuals are able to perceive both nested levels simultaneously, but left-hemisphere brain-damaged patients faithfully depict the outline of the stimuli without mentioning any local elements. On the other hand, right hemisphere-damaged patients can only draw the local stimuli. Hence, communication between the left and right hemispheres, i.e., the interaction between local and global information, is theoretically necessary for visual recognition. Once again, we sincerely appreciate the valuable feedback from the reviewer, which has contributed significantly to the refinement of our work. Best regards, The authors [1]Marvin R. Lamb, Lynn C. Robertson, Robert T. Knight, Attention and interference in the processing of global and local information: Effects of unilateral temporal-parietal junction lesions, Neuropsychologia, Volume 27, Issue 4, 1989, Pages 471-483. [2]David Navon, Forest before trees: The precedence of global features in visual perception, Cognitive Psychology, Volume 9, Issue 3, 1977, Pages 353-383.
Summary: This paper presented an efficient vision transformer backbone for several tasks including classification, segmentation and detection. The key idea of this paper is on a new design of considering the interaction between local and global features. The experiments on imagenet, AD20K, and COCO have shown that the proposed method is comparable to most existing methods. Strengths: This paper has clear contributions on the vision transformer design. 1. The proposed the local-global feature interaction is well motivated. 2. The proposed methods has been evaluated with comprehensive experiments. 3. This paper is well written and easy to understand. Weaknesses: There are some weaknesses in this paper: 1. The area of new backbone is growing rapidly. Although the proposed method shows marginally improvements over the experimented baselines, compared with concurrent work, it becomes not appealing. 2. The proposed lightvision transformer is very hard to justify as it is not as efficient as mobile vit but also not sure how it can be scaled up. Thus it is very hard to justify. 3. The baseline methods are not consistent. For example Table 1 and Table 2 pick different methods for similar comparison. For example, ParC-Net should be put into 0~5 while the author intentionally put into different comparison setting to make the number look better. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: The experiments are not fair comparison as some of the numbers are cherry picked. It will be more convincing to compare with Segnext, ParC-Net, MobileOne and mobileVIT clearly. As far as the reviewer can see, the proposed method is comparable or weaker than them. In addition, the ablation studies didn't show the bidirectional interaction is useful or not. For example, given single interaction will it be OK? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The key limitation of this paper is the proposed method lack of evidence on whether it can be mobile-friendly or scaled up as VIT liked backbones. Since the backbone is not general enough, it will has very limited usage. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **On comparison, efficiency, ablation study and scale-up capability** We thank the reviewer for recognizing the positive aspects of our work. We will address the reviewer's concerns in the following parts. **Q1:** Comparison in the paper. **R1:** Thanks. The number of parameters and the computational complexity (i.e., FLOPs) are essential when evaluating a model's size. These two indicators are equally significant. However, for mobile-friendly models, the speed (inference throughput) should also be taken into consideration. In our paper, we follow the above rules when grouping the data, primarily based on the Params and FLOPs, with inference throughput as a secondary factor. For example, ParC-Net-S (5.0M, 1.7GFLOPs) has small Params similar to FAT-B0 (4.5M, 0.7GFLOPs). However, its FLOPs are much higher than those of FAT-B0, B1 (7.8M, 1.2GFLOPs), and close to B2 (13.5M, 2.0GFLOPs). Therefore, considering both the Params and FLOPs, the most reasonable choice is to compare ParC-Net-S with FAT-B1 (5.0M, 1.7GFLOPs v.s. 7.8M, 1.2GFLOPs). So we think we haven't cherry-picked the data. But for a more rigorous comparison, we group the models strictly based on the number of parameters. The updated table can be found in the global response, and we invite the reviewer to read it. It is important to note that due to the high computational complexity and critical parameter count of ParC-Net-S, we still compare it with FAT-B1. The results in global response also show the superiority of FAT. **Q2:** Comparison with mobilevit on efficiency. **R2:** Thanks. The results of Table 1 in our paper indicate that our FATs have higher accuracy and inference throughput than the recent mobile architecture, such as MobileViT, EdgeViT, EdgeNext, and more. We invite the reviewer to read the **Q5** for more comparison. Compared to mobilevit, FAT has the advantage in efficiency (FLOPs, inference throughput, and inference latency) and accuracy. **Q3:** Whether the bidirectional interaction is useful. **R3:** Thanks. Here we ablate the local-to-global and global-to-local, respectively, to show the effect of bidirectional interaction. The results are shown in the below table. It can be seen that the bidirectional interaction plays an important role in FAT. |Model|Params(M)|FLOPs(G)|Top1-acc(%)| |-|-|-|-| |no interaction|4.5|0.72|76.2| |w/o global to local|4.5|0.72|76.9| |w/o local to global|4.5|0.72|76.8| |bidirectional interaction|4.5|0.72|77.6| **Q4:** How FAT can scale up. **R4:** Thanks. We scale up our FAT to common backbone sizes (29M, 4.4GFLOPs, 52M, 9.1GFLOPs, and 88M, 15.1GFLOPs). The results are shown below. We use the framework of Mask-RCNN with 1x schedule to conduct the object detection and instance segmentation. It can be seen that FAT has a good ability to scale up. In addition, our model also demonstrates significant advantages in downstream tasks. |Model|Params(M)|FLOPs(G)|Top1-acc(%)|$AP^b$|$AP^m$| |-|-|-|-|-|-| |Swin-T|29|4.5|81.3|42.2|39.1| |UniFormer-S|24|4.2|82.9|45.6|41.6| |MOAT-0 [1]|28|5.7|83.3|--|--| |CMT-S [2]|25|4.0|83.5|44.6|40.7| |FAT-B3|29|4.4|**83.6**|**47.6**|**43.1**| ||||||| ||||||| |Swin-S|50|8.7|83.0|44.8|40.9| |Focal-S|51|9.1|83.5|47.4|42.8| |UniFormer-B|50|8.3|83.9|47.4|43.1| |MOAT-1 [1]|42|9.1|84.2|-|-| |CMT-B [2]|46|9.3|84.5|-|-| |FAT-B4|52|9.3|**84.8**|**49.7**|**44.8**| ||||||| ||||||| |CSwin-B|78 |15.0|84.2|-|-| |MOAT-2 [1]|73|17.2|84.7|-|-| |CMT-L [2]|75|19.5|84.8|-|-| |iFormer-L |87|14.0|84.8|-|-| |FAT-B5|88|15.1|**85.2**|-|-| **Q5:** Details comparison with other methods (mobilevit, mobileone, Parc-net, SegNext...). **R5:** Thanks, for lightweight backbones, we compare their **inference latency** on GPU/CPU and **inference throughput** on GPU, CPU is Intel i9Core, GPU is V100. **Inference throughput** is measured with batch size 64. **Inference latency** is measured with batch size 1. It can be seen that our FAT has the best speed/performance trade-off among lightweight Transformers. |Model|Params(M)|FLOPs(G)$\downarrow$|CPU(ms)$\downarrow$|GPU(ms)$\downarrow$|Trp(imgs/s)$\uparrow$|Top1-acc(%)| |-|-|-|-|-|-|-| |EdgeViT-XXS|4.1|0.6|43.0|14.2|1926|74.4| |MobileViT-XS|2.3|1.1|100.2|15.6|1367|74.8| |EdgeNext-XS|2.3|0.5|52.3|13.8|1417|75.0| |tiny-MOAT-0|3.4|0.8|61.1|14.7|1908|75.5| |MobileOne-S1 [3]|4.8|0.8|38.1|5.8|3265|75.9| |EfficientNet-B0|5.3|0.4|42.4|14.5|2181|77.1| |FAT-B0|4.5|0.7|44.3|14.4|1932|**77.6**| |||||||| |||||||| |MobileOne-S2 [3]|7.8|1.3|56.8|6.9|2312|77.4| |EdgeViT-XS|6.7|1.1|62.7|15.7|1528|77.5| |tiny-MOAT-1|5.1|1.2|80.8|14.8|1506|78.3| |MobileViT-S|5.6|2.0|135.6|16.2|898|78.4| |ParC-Net-S|5.0|1.7|112.1|15.8|1321|78.6| |EfficientNet-B1|7.8|0.7|61.3|18.5|1395|79.2| |EdgeNext-S|5.6|1.3|86.4|14.2|1243|79.4| |FAT-B1|7.8|1.2|62.6|14.5|1452|**80.1**| |||||||| |||||||| |MobileOne-S4 [3]|14.8|3.0|110.2|9.8|1225|79.4| |ParC-ResNet50|23.7|4.0|160.0|16.6|1039|79.6| |EdgeViT-S|11.1|1.9|95.8|16.2|1049|81.0| |tiny-MOAT-2|9.8|2.3|122.1|15.4|1047|81.0| |EfficientNet-B3|12.0|1.8|124.2|25.4|624|81.6| |FAT-B2|13.5|2.0|93.4|14.6|1064|**81.9**| As for SegNext, we compare it with our general backbone. It can be seen that FAT is better than SegNext. |Model|Params(M)|FLOPs(G)|Top1-acc(%)| |-|-|-|-| |SegNext-T [4]|4.2|0.7|75.9| |FAT-B0|4.5|0.7|**77.6**| |SegNext-S [4]|14.0|2.2|81.2| |FAT-B2|13.5|2.0|**81.9**| |SegNext-B [4]|27|4.5|83.0| |FAT-B3|29|4.4|**83.6**| |SegNext-L [4]|45|8.2|83.9| |FAT-B4|52|9.3|**84.8**| **Reference** [1]Chenglin Yang, et al. Moat: alternating mobile convolution and attention brings strong vision models. ICLR, 2023. [2]Jianyuan Guo, et al. Cmt: Convolutional neural networks meet vision transformers. CVPR, 2022. [3]Pavan. K, et al. MobileOne: An Improved One millisecond Mobile Backbone. arXiv:2206.04040, 2022. [4]Meng-Hao Guo, et al. SegNeXt: Rethinking Convolutional Attention Design for Semantic Segmentation, NeurIPS, 2022. --- Rebuttal Comment 1.1: Title: feedback Comment: Thanks for all the experiments. The additional results make me more convinced. --- Reply to Comment 1.1.1: Title: Thanks for the reviewer DKrb's comments Comment: Dear reviewer DKrb, Thank you for your recognition of our work. We sincerely appreciate the time and effort you have put into reviewing our paper. Your feedback is crucial for improving our work. Best regards, The authors
Summary: This paper proposes a new family of light-weight vision transformers named FAT, which enhances the local and global feature fusion by a bi-directional interaction between them. Experiments on image classification, object detection, and semantic segmentation are conducted. Strengths: 1. The paper provides a good practice in designing high performance efficient ViTs, which could be a valuable reference for the future work. 2. The results on several benchmark datasets outperform all the compared models. Weaknesses: 1. An incremental work. There exists tons of works that focus on performing local and global features in ViTs (see Sec. 2 of the paper), this paper only proposes a simple way to enhance the feature fusion. Besides, other increments are actually a combination of existing successful practices. For example, the convolutional and non-overlapping downsample module is from PVTv2, the downsample in MHSA have also been widely used (e.g., PVT), CPE is from [1]. 2. Actually, a CVPR 2022 paper MixFormer [2] also proposes a bi-directional interaction of convolutional and self-attention branches. What's the pure difference between this method and MixFormer? 3. It is difficult for me to evaluate the efficacy of the proposed bi-directional interactions. The paper only compares the method with simple fusion baselines, while there exist many global-local feature fusion methods, I cannot figure out which one is better according to Table 1, since many increments are added in this method. I suggest the authors to replace the fusion method in FAT with other popular methods such as the interactions in MixFormer and message token in MSG-Transformer [3], for fair comparisons. --- **References** [1] Xiangxiang Chu, Zhi Tian, Bo Zhang, Xinlong Wang, and Chunhua Shen. Conditional positional encodings for vision transformers. In ICLR, 2023. [2] Chen, Q., Wu, Q., Wang, J., Hu, Q., Hu, T., Ding, E., Cheng, J. and Wang, J., 2022. Mixformer: Mixing features across windows and dimensions. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 5249-5259). [3] Fang, J., Xie, L., Wang, X., Zhang, X., Liu, W. and Tian, Q., 2022. Msg-transformer: Exchanging local spatial information by manipulating messenger tokens. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 12063-12072). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **On Novelty, Difference with MixFormer and Efficacy of Bi-directional interaction** We thank the reviewer for recognizing the positive aspects of our paper, and we will address the reviewer's concerns in the following parts. **Q1: Novelty.** **R1:** Thanks. We want to highlight that the key novelty is that we construct a novel module for local-global fusion, which is simple to implement and has negligible costs. Our bidirectional interaction is inspired by the existing research on the human brain [9, 10]. In contrast to all previous feature fusion methods, our bidirectional adaptive interaction stands out in two significant ways: 1. Bidirectional Property: We model a bidirectional process between the two before fusing local and global features. This results in strengthened local and global features, respectively. In contrast to commonly used linear fusion, our interaction returns two enhanced vectors representing local and global contexts rather than an inseparable vector mixed with both. 2. Adaptive Capability: Taking local features as an example, the global weights we use to enhance them are generated by tokens containing global features, making them context-aware and capable of adapting to the input data. Moreover, we calculate their element-wise product directly when fusing the enhanced local and global features. Our context-aware fusion approach is closer to the attention mechanism than linear fusion. **Q2: Difference with MixFormer [1].** **R2:** Thanks. We believe that the key differences between our FAT and MixFormer can be summarized in the following three points: 1. Different motivations: In FAT, the proposed FASA aims to establish channels for information flow from local to global and global to local, allowing local features to perceive global information and refine global features based on local features. On the other hand, MixFormer is motivated by establishing channels for information flow between an image's spatial and channel dimensions. Information flows in MixFormer enable complementary interaction between window attention (with shared parameters in the channel dimension and weak channel representation) and DWConv (with shared parameters in the spatial dimension and weak spatial representation). 2. Different branches: Both branches of MixFormer perceive local features, while in FASA, we use a branch for local perception and another for global. Additionally, the limitation in MixFormer [1]'s paper mentions that replacing the window attention branch with global attention would lead to a performance decline, contrary to our model's approach, as shown in the table below. | Model| Params(M)| FLOPs(G)|Top1-acc(%)| |--|--|--|--| |Conv + Global attention|4.5|0.72|77.6| |Conv + Window attention|4.4|0.72|75.8| 3. Different means of fusing information: Compared to FAT, MixFormer introduces more parameters and FLOPs for better representations in the processes of spatial and channel interaction. Comparison can be found in **Q3**. It can be found that FASA gets better performance (+1.1%) with fewer FLOPs (-0.25G )and Params (-1.7M). **Q3: Efficacy of bi-directional interaction** **R3:** Thanks. We compare our FASA with eight baselines, including the module with local-global fusion (MSG in MSG-Transformer [2], Mix Block in MixFormer [1], R2L in RegionViT [3]) and other self-attention mechanisms with good performance (CSWSA in CSwin-Transformer [4], Max-SA in MaxViT [5], WSA/S-WSA in Swin-Transformer [6], SRA in PVT [7] and LSA/GSA in Twins-SVT [8]). In all baselines, we use the same architecture but only modify the token mixer. The results in the below table can demonstrate the superiority of FASA. | Model|Params(M)|FLOPs(G)|Top1-acc(%)| |--|--|--|--| | MSG [2]|4.4|0.77|76.3| | Mix Block [1]|6.2|0.97|76.5| | R2L [3]|4.8|0.76|76.2| | CSWSA [4]|4.3|0.78|76.8| | Max-SA [5]|4.3|0.77|75.7| | WSA/S-WSA [6]|4.3|0.76|76.1| | SRA [7]|4.4|0.71|76.2| | LSA/GSA [8]|4.3|0.71|76.6| | FASA|4.5|0.72|**77.6**| **Reference** [1] Chen, Q., Wu, Q., Wang, J., Hu, Q., Hu, T., Ding, E., Cheng, J. and Wang, J. Mixformer: Mixing features across windows and dimensions. In CVPR, 2022. [2Jiemin Fang , Lingxi Xie, Xinggang Wang .et, al. MSG-Transformer: Exchanging Local Spatial Information by Manipulating Messenger Tokens. In CVPR, 2022. [3]Chun-Fu (Richard) Chen, Rameswar Panda, and Quanfu Fan. RegionViT: Regional-to-Local Attention for Vision Transformers. In ICLR, 2022. [4]Xiaoyi Dong, Jianmin Bao, Dongdong Chen, et al. Cswin transformer: A general vision transformer backbone with cross-shaped windows. In CVPR, 2022. [5]Zhengzhong Tu, Hossein Talebi, Han Zhang, et al. Maxvit: Multi-axis vision transformer. In *ECCV*, 2022. [6]Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In *ICCV*, 2021. [7]Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. arXiv preprint arXiv:2103.15808, 2021. [8]Xiangxiang Chu, Zhi Tian, Yuqing Wang, Bo Zhang, Haibing Ren, Xiaolin Wei, Huaxia Xia, and Chunhua Shen. Twins: Revisiting the design of spatial attention in vision transformers. In *NeurIPS*, 2021. [9]Schwarzkopf D. Samuel and Rees Geraint. 2011 Interpreting local visual features as a global shape requires awareness Proc.R. Soc. B.278: 2207–2215. doi/10.1098/rspb.2010.1909 [10]Mannion DJ, Kersten DJ, Olman CA. Regions of mid-level human visual cortex sensitive to the global coherence of local image patches. J Cogn Neurosci. 2014 Aug;26(8):1764-74. doi: 10.1162/jocn_a_00588. Epub 2014 Feb 24. PMID: 24564470; PMCID: PMC4074231. --- Rebuttal Comment 1.1: Comment: I appreciate the detailed responses of the authors, and most of my concerns are addressed. After reading the responses and other reviewers' reviews, I am delighted to raise my evaluation to "Accept" and recommend an acceptance of this paper. --- Reply to Comment 1.1.1: Title: Thanks for the reviewer c9kv's comments Comment: Dear reviewer c9kv, We are grateful for the reviewer's high evaluation of our work. The suggestions and comments from the reviewer have been incredibly helpful in enhancing our work. Again, We would like to thank the reviewer for investing their time and effort into reviewing our article. Best regards, The authors
Rebuttal 1: Rebuttal: The global response contains two things: 1. Updated version of Table 1 in the paper. 2. Plots to show the training stability. Pdf: /pdf/e18276811ccb6398f7a0a33d4c3dee8a2ec667c3.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Zeroth-Order Methods for Nondifferentiable, Nonconvex, and Hierarchical Federated Optimization
Accept (poster)
Summary: The authors tackle the problem of nondifferentiable nonconvex locally constrained federated learning (Eq. $FL_{nn}$), and the bilevel variant (Eq. $FL_{bl}$). The minimax settings (Eq. $FL_{mm}$) is a special case of the latter. The authors provide error, iteration and communication complexity for these settings where both non-convexity and non-differentiability is assumed. Strengths: The reasoning for obtaining the algorithm is clearly explained in Sec. 1. The experiment section is relevant and provides a clear comparison with other methods. Weaknesses: Contrary to the reasoning for obtaining the algorithm, the results of Thm. 1 and 2 are not discussed. Doing so would help understand their significance. Typically, do lower bounds exist in some specific settings? What are the interpretation of the different terms? What are the dominant ones? Of lower importance, the reading is a bit complicated: the content is dense, with lots of technicalities and equation in the main text. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Why is the final loss so big in Fig. 2? **Clarity.** * A reader not acquainted with Moreau smoothing cannot understand Eq. (1) without further research. Defining $\mathbb B$ and $I_X^\eta$ is advised (or at least point to `l. 158`). * Likewise, a reference for Eq. (4) is advised (or at least point to `l. 147`). * Lemma 1 in `l. 161`, Prop. 1 at `l. 171`, Thm. 1 at `l. 182`, Thm. 2 at `l. 209`: refer to the appendix' location of the proof. **Presentation.** * Tables 1 and 2 are useful summaries of the result. The font is however smaller than the rest of the text. * Framed environments `l. 48` and `l. 90` have italic or bold titles. Choose one of them? * It is a matter of taste, but I would emphasize the text of Definition and Proposition environments so that they stand out of the main text. * Fig. 1 and 2 require bigger legends (or moved to the caption) * `l. 270`: `\exp` instead of `exp` Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Discussing the significance of the results of Thm. 1 and 2 would be helpful. The comparison with other FL scheme in Table 2 is done with different metric of accuracy. Relating those metrics would provide insight on the theoretical benefits of Alg. 1, and would complements the finding of the experiment section. Notably, highlighting the role of $\eta$ in Table 2 and in Thm 1 and 2 would be welcomed too, as it is an important hyperparameter (complementary to the small remark `l. 263-265`). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your valuable suggestions and detailed comments on improving this work. Below, please see our response to each comment. $\textbf{Your comment:}$ Contrary to the reasoning for obtaining the algorithm, the results of Thm. 1 and 2 are not discussed. Doing so would help understand their significance. Typically, do lower bounds exist in some specific settings? What are the interpretation of the different terms? What are the dominant ones? $\textbf{Our response:}$ Your point is well-taken and we do agree that a remark on Theorem 1 would help with this clarification. Indeed, the communication complexity in Theorem 1, in terms of dependence on the number of clients denoted by $m$ and the parameter $\epsilon$, matches with the existing complexities for addressing smooth nonconvex FL. Importantly, this implies that the use of randomized smoothing does not lead to a degradation of the main complexity bounds in terms of the smoothed problem. Please note that we mention the utility of Theorem 2 in Remark 1. We would like to note that our major contribution in this work lies in the design and analysis of Algorithm 2 in addressing hierarchical FL and so, we attempted to emphasize on this matter in Remark 1. $\textbf{Your comment:}$ Of lower importance, the reading is a bit complicated: the content is dense, with lots of technicalities and equation in the main text. $\textbf{Our response:}$ Thank you for raising this issue. We will add more intuition to the technical details to ease the reading. $\textbf{Your comment:}$ Why is the final loss so big in Fig. 2? $\textbf{Our response:}$ We believe this is mainly because of the starting point we choose, we use vectors of all ones. $\textbf{Your comment:}$ A reader not acquainted with Moreau smoothing cannot understand Eq. (1) without further research. Defining $\mathbb B$ and $I_X^\eta$ is advised (or at least point to l. 158). Likewise, a reference for Eq. (4) is advised (or at least point to l. 147). Lemma 1 in l. 161, Prop. 1 at l. 171, Thm. 1 at l. 182, Thm. 2 at l. 209: refer to the appendix' location of the proof. $\textbf{Our response:}$ Thank you for your suggestions on making the paper more reader-friendly. We will address these by adding some intuitive explanation and point the location of our import lemmas and propositions. $\textbf{Your comment:}$ Tables 1 and 2 are useful summaries of the result. The font is however smaller than the rest of the text. Framed environments l. 48 and l. 90 have italic or bold titles. Choose one of them? It is a matter of taste, but I would emphasize the text of Definition and Proposition environments so that they stand out of the main text. Fig. 1 and 2 require bigger legends (or moved to the caption). l. 270: $\exp$ instead of exp. $\textbf{Our response:}$ We agree with your helpful suggestions on the presentation of this paper. We will address these to make the important parts of this paper more clear and consistent. $\textbf{Your comment:}$ Discussing the significance of the results of Thm. 1 and 2 would be helpful. $\textbf{Our response:}$ Thank you for your suggestion. We may provide the following note to clarify the significance of the results in our paper and how they benefit the research on FL. This work appears to be the first paper that provides an FL method with complexity guarantees for solving bilevel optimization problems where the $\textbf{lower-level problem may be constrained.}$ There have been several recent works that have highlighted the challenges in solving this type of hierarchical problems (i.e., Stackelberg games), even in centralized settings. For example, consider $\min_{x \in [-1,1]}\ \max_{y \in [-1,1],\ x+y\leq 0}\ x^2+y$. The solution is $(x^*,y^*)=(0.5,-0.5)$. The same problem, but with a reversed order of min and max, $\max_{y \in [-1,1]}\ \min_{x \in [-1,1], \ x+y\leq 0}\ x^2+y$, has the solution $(x^*,y^*)=(-1,1)$. One of the major contributions in this work is that Algorithm 2 can be employed to address this challenging class of problems complicated with the need for federated learning in both levels. To provide more details on this, see Remark 1 and Table 1 where we have provided explicit communication complexity results for addressing bilevel FL problems under the use of suitable FL methods for the lower level problem with different assumptions. These are only a few instances of the breadth of FL problems that we can provably address using Algorithm 2. $\textbf{Your comment:}$ The comparison with other FL scheme in Table 2 is done with different metric of accuracy. Relating those metrics would provide insight on the theoretical benefits of Alg. 1, and would complements the finding of the experiment section. Notably, highlighting the role of $\eta$ in Table 2 and in Thm 1 and 2 would be welcomed too, as it is an important hyperparameter (complementary to the small remark l. 263-265). $\textbf{Our response:}$ Thank you for this important observation. We would like to note that naturally, in view of Lemma 1(iv), the dependence on $\eta$ would be similar to the dependence on inverse of the Lipschitzian parameter in smooth nonconvex FL. However, we can add the dependence on $\eta$ in the convergence rate in Thm. 1 and 2, as the random variable $v$ for zeroth-order method depends on it. As we have shown explicitly in Thm. 1 and 2, the rate depends on $\eta$ and we also validated this in our experiment in sec. 5.1. --- Rebuttal Comment 1.1: Comment: Thank to the authors for answering my questions. I maintain my score. --- Reply to Comment 1.1.1: Title: Response to Reviewer U9HM Comment: We are very grateful to the referee for taking the time in reviewing our work and providing us with detailed and constructive feedback.
Summary: The paper considers the federated optimization problem with the non-smooth non-convex target function. The authors use a zero-order oracle, which allows to approximate the gradient. This approximation is related not only to the original target function, but also to its smoothed version (an additional theoretical object obtained using the random smoothing technique). For a new smooth target function, there are methods in the literature for solving it, and this is what the authors use. Mini-max problems and bilevel optimization are also considered. Experiments are given. Strengths: For me, the article is nicely written and easy to follow. Weaknesses: I ask the authors to pay attention to this work: Gasnikov, A., Novitskii, A., Novitskii, V., Abdukhakimov, F., Kamzolov, D., Beznosikov, A., ... & Gu, B. (2022). The power of first-order smooth optimization for black-box non-smooth problems. arXiv preprint arXiv:2201.12289. It gives a more or less unified scheme of how by taking any practical first-order stochastic method for smooth problems to obtain a gradient-free method for non-smooth problems. It seems that using this scheme one can obtain the results of this paper as well. Moreover, this scheme is not new; it has been used for 10 years in most papers on gradient-free optimization. In particular, the facts that are proven in the article under review are also used in the article I cited and there are links from where they are taken. Please also note the refs within the article I cited. For example, there are references to works with gradient-free methods of solving saddle point problems. Finally, the contribution of this paper looks rather technical and at the moment I am not ready to accept the work. But perhaps the authors will change my mind in the process of rebuttal. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: No Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The paper is theoretical, therefore there is no need to discuss the social negative impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for bringing this paper to our attention. We will include this work in our references and comment on it in section 2. With respect, this paper has significant differences in scope and treatment with our work. We clarify these in the following. (i) First, note that this paper studies zeroth-order schemes for $\textbf{nonsmooth convex}$ optimization problems, while we study $\textbf{nonsmooth nonconvex}$ federated stochastic optimization problems. The progression from convex nonsmooth to nonconvex nonsmooth leads to significant challenges and necessitates the introduction of Clarke-stationarity. As we have noted in lines 144--147, our work in designing Algorithm 1 is motivated by recent findings where it is shown that for a subclass of nonsmooth nonconvex functions, computing an $\epsilon$-stationary point is impossible in finite time. In Algorithm 1, we employ randomized smoothing to develop a zeroth-order FL method and then, in Proposition 1, we leverage a relationship between stationarity with respect to the smoothed problem and $\eta$-Clarke stationarity. The result in Proposition 1 is complemented in Theorem 1 where we provide explicit performance guarantees for computing a stationary point to the smoothed problem. To our knowledge, these guarantees did not exist for nondifferentiable nonconvex FL. (ii) Second, the paper you mentioned does not address hierarchical optimization problems. However, one of the key contributions in our work is the design of Algorithm 2 and its complexity analysis in Theorem 2 for addressing hierarchical problems in FL. To highlight our contributions, consider the (centralized) bilevel minimization of $f(x,y(x))$ with respect to $x$ where $y(x) \in \hbox{arg}\min_{y \in \mathcal{Y}(x)}\ h(x,y)$. First, note that even when $f$ is smooth and convex in $(x,y)$, the implicit function $f(\bullet,y(\bullet))$ is often nondifferentiable nonconvex in $x$. Second, the analytical form of $y(\bullet)$ is unavailable in most ML applications, such as hyperparameter ML. As such, $\textbf{the zeroth-order information of the implicit function $f(\bullet,y(\bullet))$ is not available.}$ In view of this challenge, it is not clear how one can develop a provably convergent zeroth-order method for computing a stationary point to the nonsmooth nonconvex and hierarchical problem $\min_{x}\ f(x,y(x))$. A naive idea is to inexactly compute $y(x)$. However, an inexact computation of $y(x)$ leads to a bias in the zeroth-order information of $f$ and consequently, a bias in the approximation of its zeroth-order gradient. This $\textbf{bias further propagates}$ throughout the implementation (please see Theorem 2(i) where we manage to derive the aggregated bias as $\sum_{r=0}^R{\varepsilon_r}$). Our work precisely addresses this challenge in Theorem 2. Importantly the bound in Theorem 2(i) implies that even when we inexactly compute $y(x)$ using a standard FL scheme, we are able to derive complexity guarantees for solving bilevel FL problems. Please note that a major technical challenge we faced in designing Algorithm 2 is that $\textbf{inexact evaluations of $y(x)$ must be avoided during the local steps.}$ This is because we consider bilevel problems where both the levels are distributed. Because of this, the inexact evaluation of $y(x)$ by each client in the local step in the upper level would require significant communications and is counter-intuitive to the nature of the FL framework. We carefully address this challenge by introducing delayed inexact computation of $y(x)$. Please see step 8 in Algorithm 2 and note how $y_{\varepsilon}$ is evaluated at $\hat x_r +v_{T_r}$ that is a different vector than the vector used by the client, i.e., $x_{i,k} +v_{T_r}$. At each communication round in the upper level, we only compute $y(x)$ inexactly twice in the global step and then use this $\textbf{delayed information}$ in the local steps. $\textbf{This delayed inexact computation of $y$ renders some technical challenges}$ in the proofs and so, we hope that the design and analysis of Algorithm 2 are not viewed as simple extensions of existing zeroth-order methods. Lastly, we thank you for considering to reevaluate our work. Although our work may seem technical, we hope to clarify on the significance of the results in our paper and how they benefit the research on FL. Our work appears to be the first paper that provides an FL method with complexity guarantees for solving bilevel optimization problems where the $\textbf{lower-level problem may be constrained.}$ There have been several recent works that have highlighted the challenges in solving this type of hierarchical problems (i.e., Stackelberg games), even in centralized settings. For example, consider $\min_{x \in [-1,1]}\ \max_{y \in [-1,1],\ x+y\leq 0}\ x^2+y$. The solution is $(x^*,y^*)=(0.5,-0.5)$. The same problem, but with a reversed order of min and max, $\max_{y \in [-1,1]}\ \min_{x \in [-1,1], \ x+y\leq 0}\ x^2+y$, has the solution $(x^*,y^*)=(-1,1)$. While we are unaware of zeroth-order methods for solving such problems even in centralized settings, one of the major contributions in our work is that Algorithm 2 can be employed to address this challenging class of problems complicated with the need for federated learning in both levels. Indeed, Algorithm 2 only requires solving the inner level problem for a fixed $x$ inexactly. In the case where the inner level problem is constrained, as it is the case in the above example as well as adversarial problems in FL, we may use a suitable method for solving the lower level problem for a fixed value of $x$ (see Algorithm 2, step 4). To provide more details on this, please see Remark 1 and Table 1 in our paper. We have provided explicit communication complexity results for addressing bilevel FL problems under use of different FL methods for the lower level problem. These are only a few instances of the breadth of FL problems that we can provably address using Algorithm 2. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the response! (i) I agree that the non-convex case is more complicated, but the scheme for analyzing it is also a unified recipe. Yes, there is no such paper in the literature that clearly describes this scheme. But this scheme has already appeared in other papers in the non-distributed case (in particular, Clarke-stationarity etc). (ii) I agree with the authors here and apologize for not noticing this earlier. It seemed to me to be a part that is not emphasized much (including in the experiments - much less space is devoted to it). Despite the fact that the contribution here looks a bit technical, but for me it is sufficient. I thank the authors again for the rebuttal, and raise my score. --- Reply to Comment 1.1.1: Title: Response to Reviewer LNaU Comment: We thank the referee for taking the time in reviewing our response in detail. We are truly grateful for the reevaluation of our work. Please see our responses in the following. (i) With regard to Algorithm 1 and Theorem 1, the referee is making a fair point and we do agree that the randomized smoothing technique has been employed in development of zeroth-order schemes for standard (single-level, non-FL, and mostly differentiable) optimization problems. In Section 3 in lines 154 to160, we have commented on some of the key references related to the zeroth-order and smoothing methods. To address your concern and clarify further, we will add a note in Section 1.2 where we summarize the main contributions and emphasize that the major contribution of our work lies in devising Algorithm 2 and its analysis in Theorem 2 in addressing hierarchical problems in FL. (ii) Thank you for this feedback. This is also a valid point. We have three numerical experiments. The first one (Section 5.1) is to validate Algorithm 1 and the other two experiments (Sections 5.2 and 5.3) are to validate Algorithm 2 in addressing a bilevel FL and a minimax FL problem. To address your concern, we will revise the numerical experiments section, reallocate the space between Sections 5.1, 5.2, and 5.3 to provide more details about the two implementations for bilevel and minimax FL problems. This will be done by moving a portion of the discussion in Section 5.1 to the Supplementary Material document. We thank the referee once again. Please let us know if you have any further questions or comments.
Summary: **Summary:** The paper develops zero-th order methods for solving non-smooth and non-convex federated problems. In addition, the paper also develops zero-th order methods for solving bilevel and minimax optimization problems in a federated setting. The authors develop a randomized smoothing-based approach and present convergence guarantees for the proposed algorithms. Experiments on training ReLU neural networks and hyperparameter optimization tasks are presented to evaluate the performance of the proposed algorithms. Strengths: **Strengths:** The paper considers an important problem class of solving non-convex and non-smooth problems with zero-th order oracle. The paper is very well written with ideas explained clearly. Weaknesses: **Weaknesses:** Here, I list some points the authors should consider addressing: 1. There is some existing literature on federated learning where randomized smoothing-based zero-th order methods have already been developed. The authors have not mentioned/or compared their approach against such methods Please see [R1], [R2] (strongly convex) below. [R1] Fang et al., Communication-Efficient Stochastic Zeroth-Order Optimization for Federated Learning [R2] Li et al., Communication-efficient decentralized zeroth-order method on heterogeneous data 2. The authors should make dependence on the problem dimension explicit in the tables and the communication complexities listed in all the theorems. The current presentation gives an indication to the reader that zero-th order methods can achieve the same guarantees as first-order methods which is certainly not true. 3. The authors should consider including some discussion on the bounded set dissimilarity since the assumption is relatively new in the context of FL. 4. Is the notion of Clarke stationarity point related to Goldstein stationary point (or are the two notions the same)? Also, the discussion on Clarke's generalized gradient of the original problem compared to the gradient of the smoothed objective appears before the definition of Clarke's stationarity point. Please consider moving the definition before the discussion. 5. In Line 172 and Prop. 1 please characterize sufficiently small $\eta$. 6. Theorem 2 ignores the additional communication complexity because of solving the lower-level problem to $\epsilon_r$ accuracy. The authors should report the complete communication complexity incurred by the algorithm for solving the bilevel optimization problem not only the upper-level complexity. 7. Finally, the experiments considered to evaluate Algorithm 1 are weak. Specifically, the authors have considered a very low-dimensional problem. It would be advisable to include training on Cifar-10/MINST datasets with practically sized neural networks to evaluate the actual performance of the proposed scheme. Moreover, for hyperparameter learning there are many baseline bilevel algorithms including BSA, TTSA, ALSET, SUSTAIN, stocBiO, etc. whose performance should be compared with the proposed scheme. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please see the weakness section above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review. We address the comments point by point as follows. $\textbf{Response to weakness 1:}$ Note that [R1] considers smooth, but possibly nonconvex, problems while [R2] considers smooth and strongly convex problems. Several distinctions persist. (I) While some FL methods use a zeroth-order framework, they require differentiability or $L$-smoothness of the objective function in the convergence theory. (II) We do agree that these schemes leverage a similar smoothing technique, however, they don't address bilevel problems. One of our key contributions is the design of Algorithm 2 and its complexity analysis in Theorem 2. To clarify, consider the (centralized) bilevel minimization of $f(x,y(x))$ with respect to $x$ where $y(x) \in \hbox{arg}\min_{y \in \mathcal{Y}(x)}\ h(x,y)$. Even when $f$ is smooth and convex in $(x,y)$, the implicit function $f(\bullet,y(\bullet))$ is often nondifferentiable nonconvex in $x$. Also, the analytical form of $y(\bullet)$ is unavailable in most ML applications. As such, $\textbf{the zeroth-order information of the implicit function $f(\bullet,y(\bullet))$ is not available}$ and it is not clear how one can develop a provably convergent zeroth-order method for computing a stationary point to the nonsmooth nonconvex problem $\min_{x}\ f(x,y(x))$. A naive idea is to inexactly compute $y(x)$. However, this leads to a bias in the approximation of the zeroth-order gradient. This $\textbf{bias further propagates}$ throughout the implementation (see Theorem 2(i) where we manage to derive the aggregated bias as $\sum_{r=0}^R{\varepsilon_r}$). Importantly the bound in Theorem 2(i) implies that even when we inexactly compute $y(x)$ using a standard FL scheme, we can derive complexity guarantees for bilevel FL. A major technical challenge we faced in designing Algorithm 2 is that $\textbf{inexact evaluations of $y(x)$ must be avoided during the local steps.}$ This is because we consider bilevel problems where both the levels are distributed. So, the inexact evaluation of $y(x)$ by each client in the local step in the upper level would require significant communications and is counter-intuitive to the nature of the FL framework. We carefully address this challenge by introducing delayed inexact computation of $y(x)$. See step 8 in Algorithm 2 and note how $y_{\varepsilon}$ is evaluated at $\hat x_r +v_{T_r}$ that is different than $x_{i,k} +v_{T_r}$. $\textbf{This delayed inexact computation of $y$ renders some technical challenges}$ in the proofs and so, we hope that the design and analysis of Algorithm 2 are not viewed as simple extensions of existing zeroth-order methods. $\textbf{Response to weakness 2:}$ Your point on the dependence on $n$ is well-taken. We can also present the dependence on the smoothing parameter $\eta$ in the complexity results. We will be happy to make this change if the opportunity is provided. However, we would like to emphasize that our major contribution in this work lies in addressing bilevel FL problems in absence of the analytical form of the lower level solution. This problem, even in very low dimensions, has remained challenging in FL. $\textbf{Response to weakness 3:}$ Regarding eq (3), as we have noted in the paper, this is an instance of the bounded gradient dissimilarity in SCAFFOLD. Indeed, when the bounded gradient dissimilarity assumption in SCAFFOLD is written for the local functions $0.5\|x-\mathcal{P}_{X_i}(x)\|^2$ we reach to eq. (3). Further, this assumption holds when for example the generated iterate by the algorithm remains bounded. We imposed this assumption to weaken the boundedness assumption. $\textbf{Response to weakness 4:}$ Thank you. Goldstein also refers to Clarke stationary points. His contribution lies in showing that if $x$ is stationary with respect to the smoothed problem, then $x$ is $\eta$-Clarke stationary with respect to the original problem. $\textbf{Response to weakness 5:}$ Thank you for pointing this out. Our proof for this result has been omitted. In the revised version of our manuscript, we will be happy to make this change and provide the detailed proof. $\textbf{Response to weakness 6:}$ We actually did provide the explicit total communication complexity in Table 1 when three different lower-level algorithms are employed. The proof can be found in Appendix C. Also, in Thm. 2, we provide upper-level communication complexity because it is the algorithm for upper-level. Note that in Algorithm 2 line 4, "FedAvg" denotes the suitable federated averaging method for solving the lower level problem. $\textbf{Response to weakness 7:}$ We thank the reviewer's suggestion on strengthening our experimental part. We will apply our methods on datasets with higher dimension. These algorithms are not federated and whether they can be applied to nondifferentiable nonconvex setting is not known. Indeed, our work appears to be the first paper that provides an FL method with complexity guarantees for solving bilevel optimization problems where the $\textbf{lower-level problem may be constrained.}$ There have been several recent works that have highlighted the challenges in solving this type of hierarchical problems (i.e., Stackelberg games), even in centralized settings. For example, consider $\min_{x \in [-1,1]}\ \max_{y \in [-1,1],\ x+y\leq 0}\ x^2+y$. The solution is $(x^*,y^*)=(0.5,-0.5)$. The same problem, but with a reversed order of min and max, $\max_{y \in [-1,1]}\ \min_{x \in [-1,1], \ x+y\leq 0}\ x^2+y$, has the solution $(x^*,y^*)=(-1,1)$. One of our major contributions is that Algorithm 2 can be employed to address this challenging class of problems in FL. To elaborate, please see Remark 1 and Table 1 where we provide explicit communication complexity results for addressing bilevel FL under use of different FL methods for the lower level problem. These are only a few instances of the breadth of FL problems that we can provably address by Algorithm 2. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: I thank the authors for their detailed response. Most of my concerns have been addressed. A few general comments that the authors may consider addressing. - I agree with the authors that the contribution is not an extension of previous federated zeroth order algorithms, I just wanted to make sure that the authors include the discussions on these papers in the related work. - Regarding the sufficiently small value of $\eta$, please characterize the value in the proposition's statement as well. - I believe the experiments on higher dimensional problems of practical interest will significantly strengthen the paper. --- Reply to Comment 1.1.1: Title: Response to Reviewer CtdY Comment: We thank the referee for taking the time to review our response and providing us with further feedback. Please see our responses to your comments in the following. $\bullet$ Definitely, we will be happy to add the discussions on the references [R1] and [R2] in the related work in the revised paper. $\bullet$ The threshold on $\eta$ is given as $\eta \leq \frac{\delta}{\max (2,nL_0)}$ where $\delta >0$ is an arbitrary scalar to achieve a stationary point w.r.t. the $\delta$-Clarke generalized gradient, $n$ is the dimension, and $L_0$ is the Lipschitz continuity parameter of $f$. To be clear, we provide a revised version of Proposition 1 and a proof sketch as follows. $\textbf{Proposition 1}$ Consider problem (4) and let Assumption 1 hold. (i) For any $\eta>0$, we have $\nabla f^{\eta}(x) \in \partial_{2\eta} f(x)$. (ii) Let $\delta>0$ be given. If $\nabla \mathbf{f}^{\eta}(x) =0$ and $\eta \leq \frac{\delta}{\max (2,nL_0)}$, then $0_n \in \partial_{\delta} \left( f + \mathbb{I}_X \right)(x).$ $\textbf{Proof Sketch}$ (i) The proof for this part follows from Proposition 2.2 in [51]. (ii) From $\nabla \mathbf{f}^\eta(x) =0$, we have $\nabla f^\eta (x)+\frac{1}{\eta}(x-P_X(x)) =0$. This implies that $\|x-P_X(x)\| \leq \eta \|\nabla f^{\eta}(x)\|$. Next, we obtain a bound as follows. $\left\|\nabla f^{\eta}(x) \right\|^2 \leq \tfrac{1}{m}\textstyle\sum_{i=1}^m\left\|\nabla f_i^{\eta}(x) \right\|^2 = \left(\tfrac{n^2}{\eta^2}\right) \tfrac{1}{m}\textstyle\sum_{i=1}^m\left\|\mathbb{E}_{v_i \in \mathbb{\eta S}} [ f_i(x+ v_i) \tfrac{v_i}{\|v_i\|} ]\right\|^2$ $ =\left(\tfrac{n^2}{\eta^2}\right) \tfrac{1}{m}\textstyle\sum_{i=1}^m\left\|\mathbb{E} [( f_i(x+ v)-f_i(x)) \tfrac{v_i}{\|v_i\|} ]\right\|^2 $ $\stackrel{\tiny \mbox{Jensen's ineq.}}{\leq}\left(\tfrac{n^2}{\eta^2}\right) \tfrac{1}{m}\textstyle\sum_{i=1}^m\mathbb{E} [| f_i(x+ v)-f_i(x)) |^2 ] $ $\stackrel{\tiny \mbox{Assumption~1 (i)}}{\leq}\left(\tfrac{n^2}{\eta^2}\right) \tfrac{1}{m}\textstyle\sum_{i=1}^m\mathbb{E} [L_0^2\|v_i\|^2 ] = n^2L_0^2.$ Thus, the infeasibility of $x$ is bounded as $\|x-P_X(x)\| \leq \eta nL_0$. Recall that the $\delta$-Clarke generalized gradient of ${\mathbb{I}_X}$ at $x$ is defined as $$\partial_\delta \mathbb{I}_X(x) \triangleq \text{conv} ( \zeta: \zeta \in \mathcal{N}_X(y), \|x-y\| \leq \delta ),$$ where $\mathcal{N}_X$ denotes the normal cone of $X$. In view of $\|x-P_X(x)\| \leq \eta nL_0$, for $y:=P_X(x) $ and $\eta \leq \frac{\delta}{\max\{2,nL_0\}}$, we have $\|x-y\| \leq \delta$. Next we show that for $\zeta:= \frac{1}{\eta}(x-P_X(x))$ we have $\zeta \in \mathcal{N}_X(y)$. From the projection theorem, we may write $$(x-P_X(x))^T(P_X(x)-z) \geq 0, \quad \hbox{for all } z \in X. $$ This implies that $\zeta^T(y-z) \geq 0$ for all $z \in X$. Note that $y=P_X(x) \in X $. Thus, we have $\zeta \in \mathcal{N}_X (y) $ which implies that $\frac{1}{\eta}(x-P_X(x)) \in \partial_\delta \mathbb{I}(x)$. From (i) and that $2\eta \leq \delta$, we have $\nabla f^{\eta}(x) \in \partial_{\delta} f(x)$. Adding the preceding relations and invoking $\nabla \mathbf{f}^{\eta}(x) =0$, we obtain $ 0 \in \partial_{\delta} \left( f + \mathbb{I}_X \right)(x)$. $\bullet$ We thank the referee for the suggestion with regard to implementing the proposed methods on higher dimensional problems of practical interest. Your point is well-taken and we plan on addressing this as follows. (1) By increasing the number of the neurons in our experiments (currently we considered 4 neurons and there are 3140 variables), and (2) By implementing our schemes on the Cifar-10 dataset as you had suggested. We will be happy to incorporate the results in the revised version of the paper. Thank you once again and please let us know if you have any further questions or comments.
Summary: Existing federated optimization algorithms usually rely on the assumption of differentiability and smoothness, which may fail to hold in practical settings. To this end, this paper employs randomized smoothing approach and zeroth-order optimization techniques for the development of FedRZO algorithm to address this kind of problem. Theoretical analyses on convergence, iteration complexity and communication complexity have also been provided. This paper further extend the idea behind the newly developed algorithm to bilevel and minimax federated optimization problems with sound theoretical guarantees. Strengths: 1. This paper has studied three different federated setting that existing works seldomly consider with both theoretical guarantees and empirical justifications. 2. The combination of randomized smoothing technique with zeroth-order optimization appeals to me, which may inspires the future work on non-smooth zeroth-order optimization. 3. Empirical results are interesting with adequate interpretation. Weaknesses: 1. This paper may give more intuitive explanation or interpretation for its equations to ease reader's understanding. E.g., the authors may need to provide certain motivations for the study of the three different cases in the introduction section instead of directly putting out these cases without any explanation. Besides, the authors may also need to provide intuitive explanation for eq. 3 to help justify the reasonability of this assumption as well as its connection with real-world examples. 2. This paper may need to compare with more federated optimization algorithms to verify the efficacy of their algorithms, e.g., SCAFFOLD, FedZO [R1], from both theoretical and empirical perspectives. [R1] Fang, Wenzhi, Ziyi Yu, Yuning Jiang, Yuanming Shi, Colin N. Jones, and Yong Zhou. 2022. “Communication-Efficient Stochastic Zeroth-Order Optimization for Federated Learning.”. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. What's the difference between line 6 of Algorithm 1 and the gradient estimation (or even the algorithm) in FedZO [R1]? They seem to share similar calculation with only different scale that has been multiplied. 2. Are the baselines in Figure 2 able to access the gradient? If so, shouldn't they converge much faster compared with your zeroth-order optimization algorithm? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Limitations haven't been mentioned. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful suggestions and comments on improving this work. $\textbf{Response to weakness 1:}$ Thank you for pointing this out. To be clear, the three formulations are all nondifferentiable nonconvex FL problems. The motivation for the three formulations is mentioned in lines 112--120 and 123--126 in section 2. The federated training of ReLU neural networks has coupled nondifferentiability and nonconvexity. In bilevel and minimax FL, the nondifferentiability and nonconvexity come from the implicit objective function. If the opportunity is provided, we will add more intuitive explanations on technical equations to smooth reader's experience. Regarding eq. (3), as we have noted in the paper, this is an instance of the bounded gradient dissimilarity in SCAFFOLD. Indeed, when the bounded gradient dissimilarity assumption in SCAFFOLD is written for the local functions $0.5\Vert x-\mathcal{P}_{X_i}(x)\Vert^2$ we reach to eq. (3). Further, this assumption holds when for example the generated iterate by the algorithm remains bounded. We imposed this assumption to weaken the boundedness assumption. $\textbf{Response to weakness 2:}$ We would like to point out that both these references assume $L$-smoothness of the local objectives. This is a strong assumption and it often fails to hold when considering bilevel problems. We don't make this assumption and only consider Lipschitz continuity of the objective function. Please note that the progression from smooth nonconvex to nonsmooth nonconvex leads to some challenges and necessitates the introduction of Clarke-stationarity. As we have noted in lines 144--147, our work in designing Algorithm 1 is motivated by recent findings where it is shown that for a subclass of nonsmooth nonconvex functions, computing an $\epsilon$-stationary point is impossible in finite time. $\textbf{Response to question 1:}$ We do agree that FedZO leverages a similar smoothing technique, however, it does not address bilevel problems. One of the key contributions in our work is the design of Algorithm 2 and its complexity analysis in Theorem 2. To clarify, consider the (centralized) bilevel minimization of $f(x,y(x))$ with respect to $x$ where $y(x) \in \hbox{arg}\min_{y \in \mathcal{Y}(x)}\ h(x,y)$. Even when $f$ is smooth and convex in $(x,y)$, the implicit function $f(\bullet,y(\bullet))$ is often nondifferentiable nonconvex in $x$. Second, the analytical form of $y(\bullet)$ is unavailable in most ML applications. As such, $\textbf{the zeroth-order information of the implicit function $f(\bullet,y(\bullet))$ is not available}$ and it is not clear how one can develop a provably convergent zeroth-order method for computing a stationary point to the nonsmooth nonconvex problem $\min_{x}\ f(x,y(x))$. A naive idea is to inexactly compute $y(x)$. However, an inexact computation of $y(x)$ leads to a bias in the approximation of the zeroth-order gradient. This $\textbf{bias further propagates}$ throughout the implementation (see Theorem 2(i) where we manage to derive the aggregated bias as $\sum_{r=0}^R{\varepsilon_r}$). Importantly the bound in Theorem 2(i) implies that even when we inexactly compute $y(x)$ using a standard FL scheme, we are able to derive complexity guarantees for solving bilevel FL problems. Please note that a major technical challenge we faced in designing Algorithm 2 is that $\textbf{inexact evaluations of $y(x)$ must be avoided during the local steps.}$ This is because we consider bilevel problems where both the levels are distributed. Because of this, the inexact evaluation of $y(x)$ by each client in the local step in the upper level would require significant communications and is counter-intuitive to the nature of the FL framework. We carefully address this challenge by introducing delayed inexact computation of $y(x)$. See step 8 in Algorithm 2 and note how $y_{\varepsilon}$ is evaluated at $\hat x_r +v_{T_r}$ that is different than $x_{i,k} +v_{T_r}$. $\textbf{This delayed inexact computation of $y$ renders some technical challenges}$ in the proofs and so, we hope that the design and analysis of Algorithm 2 are not viewed as simple extensions of existing zeroth-order methods. Lastly, we hope to clarify the significance of the results in our paper and how they benefit the research on FL. Our work appears to be the first paper that provides an FL method with complexity guarantees for solving bilevel optimization problems where the $\textbf{lower-level problem may be constrained.}$ There have been several recent works that have highlighted the challenges in solving this type of hierarchical problems (i.e., Stackelberg games), even in centralized settings. For example, consider $\min_{x \in [-1,1]}\ \max_{y \in [-1,1],\ x+y\leq 0}\ x^2+y$. The solution is $(x^*,y^*)=(0.5,-0.5)$. The same problem, but with a reversed order of min and max, $\max_{y \in [-1,1]}\ \min_{x \in [-1,1], \ x+y\leq 0}\ x^2+y$, has the solution $(x^*,y^*)=(-1,1)$. One of the major contributions in our work is that Algorithm 2 can be employed to address this challenging class of problems complicated with the need for federated learning in both levels. To provide more details on this, please see Remark 1 and Table 1 in our paper. We have provided explicit communication complexity results for addressing bilevel FL problems under use of different FL methods for the lower level problem. These are only a few instances of the breadth of FL problems that we can provably address using Algorithm 2. $\textbf{Response to question 2:}$ Thank you for the question. They do have access to the gradient, however they seem to be sensitive to the choice of the parameter $\beta$ in the scaled softplus function (differentiable) defined in line 270, larger $\beta$ means more accurate approximation of the ReLU (nondifferentiable), and ReLU is what we used as the activation function. As shown in Figure 2, our algorithm shows more robustness.
Rebuttal 1: Rebuttal: We sincerely appreciate all the reviewers for their time and thoughtful reviews on improving this paper. The following is a summary of major changes that would appear in the revision if accepted. 1. In response to reviewer "Lumm" and "LNaU", we will add the following note to emphasize the significance of our contributions in addressing bilevel FL problems. This work appears to be the first paper that provides an FL method with complexity guarantees for solving bilevel optimization problems where the $\textbf{lower-level problem may be constrained.}$ There have been several recent works that have highlighted the challenges in solving this type of hierarchical problems (i.e., Stackelberg games), even in centralized settings. For example, consider $\min_{x \in [-1,1]}\ \max_{y \in [-1,1],\ x+y\leq 0}\ x^2+y$. The solution is $(x^*,y^*)=(0.5,-0.5)$. The same problem, but with a reversed order of min and max, $\max_{y \in [-1,1]}\ \min_{x \in [-1,1], \ x+y\leq 0}\ x^2+y$, has the solution $(x^*,y^*)=(-1,1)$. One of the major contributions in this work is that Algorithm 2 can be employed to address this challenging class of problems complicated with the need for federated learning in both levels. To provide more details on this, see Remark 1 and Table 1 where we have provided explicit communication complexity results for addressing bilevel FL problems under the use of suitable FL methods for the lower level problem with different assumptions. These are only a few instances of the breadth of hierarchical FL problems that we can provably address using Algorithm 2. 2. In response to reviewer "Lumm", "CtdY" and "U9HM", we will add more interpretation of technical terms to smooth reader's experience, such as a note on the bounded set dissimilarity (eq. 3) that is an instance of the "bounded gradient dissimilarity" condition, and we will present the definition of Clarke stationary point earlier in section 1. 3. In response to reviewer "Lumm", "CtdY" and "LNaU", we will add a paragraph to discuss and compare other zeroth-order methods in FL and emphasize that they have made stronger assumptions than in our work. 4. In response to reviewer "CtdY" and "U9HM", we will provide all the complexity results explicitly in terms of both problem dimension $n$ and smoothing parameter $\eta$. 5. In response to reviewer "CtdY", we will apply our method on other datasets such as Cifar-10. 6. In response to reviewer "U9HM", we will add a note to discuss the significance of Thm. 1 and Thm. 2. Please see the $\textbf{detailed responses}$ to the reviewers.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Unsupervised Protein-Ligand Binding Energy Prediction via Neural Euler's Rotation Equation
Accept (poster)
Summary: In this paper, the authors developed an energy-based model for unsupervised binding affinity prediction. The energy-based model was trained under SE(3) denoising score matching where the rotation score was predicted by Neural Euler’s Rotation Equation. Experiments on protein-ligand binding and antibody-antigen binding are conducted. Strengths: 1. The paper is well-written and easy to follow. 2. The proposed method outperforms all unsupervised baselines and supervised baselines in the antibody case. 3. The code and data are provided. 4. Systematic ablation studies are performed to show the effectiveness of modules. Weaknesses: 1. The idea of using unsupervised generative models for binding affinity prediction is not new. The authors fail to discuss or compare with related works [1,2] [1] Luo et al., Rotamer Density Estimator is an Unsupervised Learner of the Effect of Mutations on Protein-Protein Interaction, ICLR 23 [2] Guan et al., 3D Equivariant Diffusion for Target-Aware Molecule Generation and Affinity Prediction, ICLR 23 2. The authors only calculate the Pearson correlation coefficient. For the binding affinity prediction, other metrics such as RMSE, MAE, AUROC, and Pearson correlation coefficient. 3. The authors fail to consider the flexibility of proteins and ligands. The sidechains of proteins are also ignored. However, these factors are quite important for protein-ligand/protein binding. Technical Quality: 3 good Clarity: 3 good Questions for Authors: please see the weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations are well discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and suggestions. Please read our response below and let us know if you have more questions. **Q1**: The authors fail to consider the flexibility of proteins and ligands. The side-chains of proteins are also ignored. However, these factors are quite important for protein-ligand and protein binding. We agree that it is beneficial to model the flexibility of ligands and incorporate the side-chain information of proteins. Given the limited time, we partially address this issue by including all protein atoms (including side-chains) as input instead of only $C_\alpha$ atoms. This modification only changes the input of our model while the model architecture and NERE DSM objective remains the same. We refer to this extended model as NERE DSM (all-atom). Our results (rebuttal Figure 1e) suggest that including side-chain atoms substantially improves the performance of our method on some of the datasets. Please read our global response above for details. In the future, we plan to model the flexibility of rotatable bonds and protein side-chain atoms by introducing a side-chain DSM loss, which adds rotation noise on the rotatable bonds and side-chain atoms and use NERE to predict the rotation noise. **Q2**: The authors only calculate the Pearson correlation coefficient. For the binding affinity prediction, including other metrics such as RMSE, MAE, and Spearman correlation coefficient would be helpful. We report Spearman correlation and p-values as additional metrics in rebuttal Table 1. Our model (NERE DSM) remains the best under all three metrics. Unfortunately, it was not possible to report root mean square error (RMSE) or mean absolute error (MAE) because our model does not predict absolute affinity values. Shifting or scaling $E(A, X)$ by any constant will be equally optimal under the DSM objective. **Q3**: The idea of using unsupervised generative models for binding affinity prediction is not new. The authors fail to discuss or compare with related works [1,2]. Thank you for suggesting these papers and they are quite related. Luo et al. [1] proposed a flow-based generative model to estimate the probability distribution of protein side-chain conformations. Their unsupervised model (named RDE-Linear) uses the learned entropy of side-chains to predict $\Delta\Delta G$ of protein mutations. RDE-Linear and NERE DSM are complementary to each other because they consider different degrees of freedom (DOF). RDE-Linear considers the DOF of side-chains and the backbone structure is fixed. NERE DSM considers the DOF of backbone structure (i.e., different docking angles) but not the DOF of side-chains. To compare NERE DSM with RDE-Linear, we train our model (the all-atom version) on approximately 27000 non-redundant protein-protein complexes (downloaded from PDB) and evaluate it on the entire SKEMPI mutation effect prediction benchmark. For each downloaded complex, we decompose it into pairs of two chains and remove any pairs of chains with buried surface area less than 500. We then clustered these proteins and took one representative complex from each cluster. In each training step, we randomly rotate one of the proteins in a complex and update the model with our NERE DSM objective. The input to our model includes side-chain atoms because the downstream task is mutation effect prediction. At test time, given a pair of wild type and mutated protein complexes, our model first predicts their binding energy $E_\mathrm{wildtype}$ and $E_\mathrm{mutated}$ and calculates $\Delta\Delta G=E_\mathrm{mutated} - E_\mathrm{wildtype}$. Following Luo et al. [1], we report per-structure and overall Pearson and Spearman correlation between predicted and experimental $\Delta\Delta G$. To compute per-structure correlation, we group mutations by structure, discard groups with less than 10 mutation data points, and calculate correlations for each structure separately. As shown in the table below, we find that NERE DSM achieves a much higher performance than RDE-Linear on three out of four metrics. We visualize the overall correlation between predicted and experimental $\Delta\Delta G$ in rebuttal Figure 1h. | SKEMPI | Pearson (per structure) | Spearman (per structure) | Pearson (overall) | Spearman (overall) | |----|----|---|----|---| | RDE-Linear | 0.290 | 0.263 | **0.419** | 0.3514 | | NERE DSM (all-atom) | **0.388±0.020** | **0.402±0.035** | 0.399±0.039 | **0.402±0.013** | Guan et al. [2] proposed TargetDiff, a diffusion model for 3D small molecule design. Similar to our method, TargetDiff is trained on cocrystal structures of protein-ligand complexes in an unsupervised manner. They demonstrated that the unsupervised features learned by TargetDiff could improve supervised affinity prediction. Specifically, they augmented EGNN [3] with the features learned by TargetDiff and trained the augmented model (EGNN + TargetDiff) on the PDBBind training set. To compare our method with TargetDiff, we followed their training/validation/test split and fine-tuned the pre-trained NERE DSM model on the PDBBind training set. As shown in the table below, our method outperforms the EGNN + TargetDiff baseline, which demonstrates the advantage of our approach. The baseline results are copied from Guan et al. 2023. | TargetDiff test set | Pearson | Spearman | |----------------------|----------------|---------------| | TransCPI | 0.576 | 0.540 | | MONN | 0.624 | 0.589 | | IGN | 0.698 | 0.641 | | EGNN | 0.648 | 0.598 | | EGNN + TargetDiff | 0.680 | 0.637 | | NERE DSM (pretrained) | 0.500±0.004 | 0.516±0.013 | | NERE DSM (fine-tuned) | **0.703±0.007** | **0.656±0.017** | References 1. Luo et al., Rotamer Density Estimator is an Unsupervised Learner of the Effect of Mutations on Protein-Protein Interaction, ICLR 23 2. Guan et al., 3D Equivariant Diffusion for Target-Aware Molecule Generation and Affinity Prediction, ICLR 23 3. Satorras et al., E(n) equivariant graph neural networks. ICML 2021 --- Rebuttal Comment 1.1: Title: Reply to the authors Comment: I have read the reply and appreciate the author's reply. Thanks!
Summary: The authors propose an energy-based model for unsupervised binding affinity estimation, which is trained by SE(3) denoising score matching (DSM). Different from standard DSM, they add noise by random rotations and translations on the ligand. Utilizing Euler's rotation equations, the rotation matrix can be derived from predicted forces on each atom, making it possible to directly supervise on the predicted forces instead of the rotation matrix. Experiments on protein-ligand binding and antibody-antigen binding demonstrate the superiority of the proposed method over existing unsupervised method. Strengths: 1. Using Euler's rotation equations to formulate the supervision on the rotations as supervision on the predicted atomic forces is novel and clever. 2. Unsupervised learning on binding affinity is important and meaningful as precise affinity labels are hard to obtain in practical. 3. The docked setting is interesting and reals the potential applications of the proposed method in real scenarios where crystal structure is often hard to obtain. 4. It's very interesting to see that pair-wise energies are mostly lower at CDR-H3 and CDR-L3, which perfectly aligns with the domain knowledge. Weaknesses: 1. The learnt probability distribution $p(A, X)$ might not be well aligned with the actual $p(A, X)$ because the actual one is more complicated than the prior distribution of rotation and translation noises. In this sense, for the learnt $p(A, X)$, $p(A, X) \propto \exp(-E(A, X))$ might not hold. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Under the docked setting, the RMSD is quite large for the generated antibody-antigen complexes (median 19.4), however, the correlation is similar to the crystal setting. Does that mean the model actually did not learn useful geometric interaction information? Otherwise, with such badly docked structures, the model should have much worse performance because the structures can not provide accurate geometric information. 2. Will the correlation be improved if you first denoise the input structure (regardless whether it is crystal structure or docked structure) with the model, then output the energy from the denoised structure? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have mentioned that the limitation might be only considering the alpha carbon in the antibody-antigen setting. Perturbations on the side chains can be designed with a similar strategy in the future. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and suggestions. Please read our response below and let us know if you have more questions. **Q1**: Under the docked setting, the RMSD is quite large for the generated antibody-antigen complexes (median 19.4), however, the correlation is similar to the crystal setting. Does that mean the model actually did not learn useful geometric interaction information? The reason why our model is robust to docking error is that we use a very loose threshold for residue contact. In NERE DSM, the predicted binding energy is a sum of pairwise residue interaction energy: $E(A, X) = \sum_{i,j: d_{ij} < 20} f_o(h_i, h_j)$. In this formula, we consider two residues to be interacting (a.k.a., forming a contact) if their $C_\alpha$ distance $d_{ij}<20$. This threshold is much larger than standard practice (5-8Å). In fact, we find that the model becomes a lot more sensitive to docking error if we set the contact threshold to 10Å or lower. With a tighter contact threshold, the true contacts are likely to be lost in the docked setting and the model cannot extract useful geometric information for binding energy prediction. **Q2**: Will the correlation be improved if you first denoise the input structure (regardless whether it is crystal structure or docked structure) with the model, then output the energy from the denoised structure? We found that the performance is roughly the same with or without this denoising step at inference time (see the table below). | | CASF (crystal) | CASF (docked) | SAbDab (crystal) | SAbDab (docked) | |----------------------|----------------|---------------|------------------|-----------------| | without denoising | 0.656 ± 0.012 | 0.651 ± 0.011 | 0.361 ± 0.051 | 0.360 ± 0.051 | | with denoising | 0.655 ± 0.013 | 0.649 ± 0.012 | 0.367 ± 0.029 | 0.359 ± 0.043 | --- Rebuttal Comment 1.1: Title: Thanks Comment: Thanks for the response. However, I'm still concerned with the antigen-antibody experiments on docked structures. A median RMSD of 19.4 actually means that the orientation and relative position of the antibody to the antigen are barely correct. Though with larger epitope the correct interaction pairs might still be included in the calculation of the energy, they are not providing the correct geometric information! For example, an actually interacting pair might has long distance in the docked structure, and if the model learnt the true correlation between the geometry and the energy (e.g. interacting pairs with shorter distances might contribute more to the binding affinity), it should fail on such cases because the provided geometric information is wrong! Therefore, could you plot the correlation between the docking RMSD and the error of the predicted energy? It is better if you could compare the pair-wise energy contribution predicted by the model to those calculated by physical softwares (e.g. Rosetta). --- Reply to Comment 1.1.1: Title: Additional analysis Comment: Thank you for your comments. To analyze how docking RMSD affects the performance of NERE DSM, we group the SAbDab test set into multiple bins based on RMSD and report the performance of NERE DSM for each bin. The result is shown in the table below. Here are our main findings: 1. We realize that 42.8% of the test cases have RMSD less than 10Å. According to CAPRI [1], a docking pose with RMSD less than 10Å is considered correct. While the median RMSD is 19.4Å, we still have a fair number of docked instances that are correct. 2. We find that the model performance decreases as docking RMSD increases. When RMSD is above 40Å, the performance is almost zero, which shows that the performance of our model is influenced by the docking error. 3. We notice that the model performance is surprisingly decent when docking RMSD is between 10Å and 30Å. It seems that the learned energy function is not sensitive enough to geometrical changes. We agree that some geometric information seems to be lost in the learned representation, which reveals the limitation of the encoder architecture we used in this paper (frame-averaging neural network [2]). We will explore alternative geometric encoder architectures in the future to improve our method. 4. We did not compare NERE's pairwise energy contribution with Rosetta because the performance of Rosetta is quite poor on our test set (correlation = 0.025). We were afraid that the Rosetta pairwise energy itself may be inaccurate. In summary, our results show that the performance of NERE is influenced by the docking error, but it is not sensitive as we expected. Please let us know if you have any more questions. | RMSD range | # test cases | Correlation | |------:|-------------:|------------:| | 0-10 | 238 | 0.506 | | 10-20 | 47 | 0.441 | | 20-30 | 69 | 0.410 | | 30-40 | 59 | 0.268 | | 40-50 | 51 | 0.030 | | >50 | 91 | -0.031 | [1] Janin et al., CAPRI: a Critical Assessment of PRedicted Interactions, Proteins 2003 [2] Puny et al., Frame Averaging for Invariant and Equivariant Network Design, ICLR 2022
Summary: The authors address the problem of protein-ligand binding. Specifically, the authors reformulate binding energy prediction as a generative modeling task: they train an energy-based model on a set of unlabelled protein-ligand complexes using denoising score matching and interpret its log-likelihood as binding affinity. The key contribution of this work is a new equivariant rotation prediction network for SE(3) DSM called Neural Euler’s Rotation Equations (NERE). Strengths: * The work is easy to follow and ideas are presented in a clear way * The proposed approach (with certain particularities) generalizes to both small and large molecules Weaknesses: NERE DSM relies on cocrystal structures for training and cocrystal/docked structures for binding prediction of a new protein-ligand / Ab-Ag pair, which can be limiting in terms of experimental/computational resources. In addition, NERE DSM does not predict binding affinity directly, hence the model is evaluated using correlation instead of RMSE. The utility of NERE DSM in real-world scenarios where we seek to predict affinities for a large set of protein-ligand / Ab-Ag pairs seems limited. Moreover, the extremely slow rate at which new cocrystal data becomes available is an additional limiting factor. Could the authors comment on this? Technical Quality: 3 good Clarity: 3 good Questions for Authors: ### Questions * I’m curious, why were only the top 50 antigen residues chosen instead of all residues within a certain distance, as done for the protein in the small molecule case? * Could you provide more details on the nature of the test splits? Are they IID w.r.t. the training set, or are they split by sequence similarity/some other criteria? ### Questions regarding protein-ligand binding: * Why did the authors choose to compare their model to supervised models trained on 19K labeled affinity datapoints? Did I understand correctly that the 4806 datapoints used to train NERE DSM and the 19K are datapoints used to train the supervised models are disjoint? Is there no cases where affinity data is available together with cocrystal data so both models could be trained on the same dataset? * How long does AutoDock Vina take to dock a protein-ligand pair on average (in your case)? ### Questions regarding Ab-Ag binding: * It is not entirely clear to me, are the supervised pre-trained? * I have the same question here as for protein-ligand binding, is there no cases where affinity data is available together with cocrystal data so both (unsupervised and supervised) models could be trained on the same dataset? * How long does ZDOCK take to dock a protein-ligand pair on average (in your case)? ### Questions regarding ablation studies: * In lines 292-294, the authors say “Interestingly, we find that the model pays the most attention to CDR-H3 and CDR-L3 residues. In most test cases, their energy is much lower than other CDR residues. This agrees with the domain knowledge that CDR-H3 and CDR-L3 residues are the major determinant of binding “. To support this claim, the authors only show one Ab-Ag example which does not guarantee that the same trend is observed across all samples in the test set. Could you report a global metric, for example, the average difference in the test set between energies in CDRs vs energies in remaining Ab regions (plus statistical significance)? ### Minor comments * In lines 72-73, “Due to scarcity of binding affinity data, very few work has explored supervised learning for antibody binding affinity prediction.” Should be “Due to scarcity of binding affinity data, very few works have explored supervised learning for antibody binding affinity prediction.” * 
Experiments were repeated with 5 random seeds but only the average values were reported in table 1, could you also report the standard deviation with ±? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: In my opinion, the authors have not entirely addressed the limitations of their work, specifically, in relation to my comment in the "Weaknesses" section I set my assessment to "borderline reject" but my opinion can change based on the author's answers and comments Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and suggestions. Please read our response below and let us know if you have more questions. **Q1**: NERE DSM relies on co-crystal structures for training and co-crystal/docked structures for binding prediction of a new protein-ligand / Ab-Ag pair, which can be limiting in terms of experimental/computational resources. In terms of computational resources, modern molecular docking tools (based on neural networks) are much faster than traditional docking tools like AutoDock Vina and ZDOCK. For example, PLANET [1] is a neural network model for protein-ligand docking. On our CASF test set, it takes only about 0.1 second for PLANET to dock one ligand (on average). GeoDock [2] is another neural network model for protein-protein docking. On our SAbDab test set, it takes about 0.1 second to dock one antibody to its antigen. Using these neural network based docking models, we can screen over millions of compounds/antibodies per day. We also evaluated the performance of our model when using the structures docked by PLANET and GeoDock. As shown in the table below, there is only a small decrease in Pearson correlation but the model remains accurate. Therefore, we argue that computational resources are not limiting factors to our unsupervised binding energy prediction framework. | | Slow Docking (Vina/ZDOCK) | Ultrafast Docking (PLANET/GeoDock) | |----|---|----| | CASF | 0.656 ± 0.012 | 0.616 ± 0.009 | | SAbDab | 0.360 ± 0.051 | 0.350 ± 0.069 | In terms of experimental resources, our method requires co-crystal structures for training. We agree that it can be a limiting factor but we can alleviate this issue by utilizing a larger set of protein-protein complexes to pre-train our model. There are 208,347 co-crystal structures in the Protein Data Bank in total and its growth is about 10,000 per year. We will explore this direction in our future work. **Q2**: Docking speed of AutoDock Vina, ZDOCK, and other docking models. The docking speed of AutoDock Vina and ZDOCK is 25 second and 20 second per complex (using a 64-CPU server). The docking speed is much faster for neural network docking tools like GeoDock and PLANET, with approximately 0.1 second per complex. **Q3**: Are there no cases where affinity data is available together with co-crystal data so both supervised and unsupervised models could be trained on the same dataset? For antibodies, there are only 566 co-crystal structures that have affinity labels. They are all included in the test set and could not be used to train our model. For small molecules, the co-crystal structures in our training set actually come with affinity labels, though we did not use the affinity labels to train our model. For a fair comparison, we trained a supervised model on the same training set and model architecture as NERE DSM, but using affinity labels and a regression training objective. This supervised model achieves an average Pearson correlation of 0.734 ± 0.012 on the CASF test set, which is lower than TankBind or IGN because it uses less training data. This supervised model is better than NERE DSM but the margin is relatively small (0.734 vs 0.656). In this case, our unsupervised method is able to recover nearly 90% of the supervised model performance. **Q4**: Why were only the top 50 antigen residues chosen as epitopes instead of all residues within a certain distance, as done for the protein in the small molecule case? We found that the latter strategy yielded a lower performance (Pearson R = 0.317) because the number of epitope residues are quite uneven when using a distance threshold. The performance becomes better when using a fixed number of epitope residues. **Q5**: Details on the nature of the test splits The current train/test split is based on IID, but we filtered the training set so that the same ligand/antibody/antigen does not appear in both training and test sets. To further study the impact of train/test split strategy, we also train our model with sequence similarity split. In this case, we remove all training instances whose proteins/antibodies/antigens have 40% similarity to the test set. As shown in the following table, we find that the average performance of NERE is not impacted by similarity split (0.508 vs 0.503), which confirms that our model is not making predictions simply based on similarity. | | Original split | Similarity split | |----|----|-----| | CASF (Crystal) | 0.656 ± 0.012 | 0.634 ± 0.009 | | SAbDab (Crystal) | 0.360 ± 0.051 | 0.372 ± 0.038 | | Average | 0.508 | 0.503 | **Q6**: Could you report a global metric, for example, the average difference in the test set between the energy of CDR3 residues versus non-CDR3 residues (plus statistical significance)? The average difference between the energy of a CDR3 residue and a non-CDR3 residue is -2.18, with a p-value of 5.8e-60 under student t-test. From the rebuttal Figure 1g, we can clearly see that the average difference is shifted towards negative range. Therefore, we conclude that the model pays more attention to CDR3 residues than non-CDR3 residues. **Q7**: Experiments were repeated with 5 random seeds but only the average values were reported in table 1, could you also report the standard deviation with ±? The standard deviation is already reported in table 1 as subscripts (e.g., $0.656_{.011}$ means 0.656 ± 0.011). **Q8**: In the antibody-antigen case, are the supervised model pre-trained? Yes, the supervised $\mathrm{FANN_{transfer}}$ model is pre-trained on the SKEMPI protein-protein binding database, which contains approximately 6000 binding affinity data points. This model also uses the pre-trained ESM-2 protein language model embedding for residue features. Reference 1. Zhang et al. Planet: A multi-objective graph neural network model for protein-ligand binding affinity prediction. bioRxiv 2023 2. Chu et al. Flexible Protein-Protein Docking with a Multi-Track Iterative Transformer. bioRxiv 2023 --- Rebuttal Comment 1.1: Title: Thanks Comment: I thank the authors for their thoughtful responses. The authors addressed all my questions. I have 2 remaining comments: 1. Regarding response to Q1: "In terms of experimental resources, our method requires co-crystal structures for training. We agree that it can be a limiting factor but we can alleviate this issue by utilizing a larger set of protein-protein complexes to pre-train our model. There are 208,347 co-crystal structures in the Protein Data Bank in total and its growth is about 10,000 per year. We will explore this direction in our future work." There is no guarantee that pretraining on PPI data would help, particularly for the Ab-Ag prediction case. In any case, pretraining does not remove the model's limitation of only being trainable with co-crystal structures. Any thoughts on adapting the model to be trainable with docked structures? 2. Regarding Q5, was there a reasoning/justification behind the 40% similarity cutoff? --- Reply to Comment 1.1.1: Title: Thanks Comment: Thank you for your thoughtful comments. Q1: Any thoughts on adapting the model to be trainable with docked structures? Our model can be directly trained on docked structures if they are accurate (close to crystal structures). In the case of protein-ligand binding, we can leverage docked structures from the CrossDocked2020 dataset [1], which has 22.5 million docked poses. We can use their binding pose classification model (accuracy = 0.95 AUC) to evaluate the accuracy of each docked structure and add only accurate structures to our training set. The same approach works for antibody-antigen binding. For example, we can use state-of-the-art protein complex structure prediction models like AlphaFold-multimer (AFM) [2] to generate a large set of antibody-antigen complexes. AFM gives a confidence score for each predicted structure that indicates how likely a docked pose is correct. The docking success rate of AFM on the standard antibody-antigen docking benchmark [3] is 28.4%, but the success rate becomes 80% if we only look at docked complexes with confidence above 0.8. Therefore, we can add these highly confident complexes into the training set without contaminating it with too many incorrect poses. Q2: Was there a reasoning/justification behind the 40% similarity cutoff? We used the 40% similarity cutoff because we saw that some prior work on antibody generation [4, 5] used the same similarity cutoff for their experiments. In addition, we found that most of the similar instances are filtered at the 70-90% threshold. For example, in the protein-ligand case, the original training set had 4806 complexes. With the 90% cutoff, the training set size became 3352. With the 40% cutoff, the training set size became 3064. The filtering speed was 1450 (per 10%) in the beginning and 60 (per 10%) after that. Based on these two observations, we thought that the 40% similarity cutoff was reasonable. References 1. Francoeur et al. 3D Convolutional Neural Networks and a CrossDocked Dataset for Structure-Based Drug Design, 2020 2. Evans et al. "Protein complex prediction with AlphaFold-Multimer." biorxiv 2021 3. Guest et al. "An expanded benchmark for antibody-antigen docking and affinity prediction reveals insights into antibody recognition determinants." Structure 2021 4. Jin et al. "Iterative refinement graph neural network for antibody sequence-structure co-design." ICLR 2022 5. Luo et al. Antigen-Specific Antibody Design and Optimization with Diffusion-Based Generative Models for Protein Structures. NeurIPS 2022
Summary: The paper introduces an unsupervised learning approach for predicting protein-ligand binding energy, called NERE (Neural Equivariant Rotation Estimation). The authors employ SE(3) denoising score matching to train an energy-based model on a dataset of unlabeled protein-ligand complexes. The model's effectiveness is demonstrated through evaluations on protein-ligand and antibody-antigen binding affinity benchmarks from PDBBind and the Structural Antibody Database (SAbDab). Strengths: 1. The paper innovatively applies SE(3) score matching to train an energy-based model, positioning it as an unsupervised energy ranking model. 2. The experimental results highlight the model's effectiveness, particularly when compared with other unsupervised models or physics-based models. 3. The paper is well-structured and easy to comprehend, making the proposed method and its implications clear to the reader. Weaknesses: 1. The proposed method only considers rigid transformations (translations & rotations) in the diffusion process, neglecting the degrees of freedom in small molecule ligands, such as additional rotatable bonds. 2. The method employs a residue-level representation for proteins, which might omit fine-grained information that could be crucial for accurate binding prediction. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Why was the SE(3) diffusion process chosen over the standard elucidating diffusion process? While the standard diffusion process may generate nonsensical conformations, the proposed diffusion process might not explore the entire conformation space. 2. In section 4.1, is the center of mass, denoted as $\mu$, computed based on atomic weight? 3. How does the model perform in the "Docked structure" scenario when the docking error is high? 4. Why does contrastive learning underperform compared to using score matching to train the energy-based model? Further elaboration on this would be beneficial. 5. Are there any experiments conducted on the docking benchmark of the CASF test set? It seems that the proposed method is more suitable for identifying the correct binding pose (ranking among multiple poses of a specific molecule) rather than ranking between binding poses of multiple molecules. 6. Different noise levels appear to lead to different energy functions, while only the minimal noise level can approximate the energy function of the real data distribution. How does the proposed method handle multiple noise scales? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The proposed method appears to be sensitive to docking error. However, in real-world scenarios, finding the true binding pose is a challenging problem. This sensitivity could limit the method's practical applicability. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and suggestions. Please read our response below and let us know if you have more questions. **Q1**: The proposed method only considers rigid transformations (translations & rotations) in the diffusion process, neglecting the degrees of freedom in small molecule ligands, such as rotatable bonds. The method employs a residue-level representation for proteins, which might omit fine-grained information that could be crucial for accurate binding prediction. We agree that it is beneficial to model the flexibility of ligands and incorporate the side-chain information of proteins. Given the limited time, we partially address this issue by including all protein atoms (including side-chains) as input instead of only $C_\alpha$ atoms. This modification only changes the input of our model while the model architecture and NERE DSM objective remains the same. Our results suggest that including side-chain atoms substantially improves the performance of our method on some of the datasets. Please read our global response above and the rebuttal Figure 1e for details. In the future, we plan to model the flexibility of rotatable bonds and protein side-chain atoms by introducing a side-chain DSM loss, which adds rotation noise on the rotatable bonds and side-chain atoms and use NERE to predict the rotation noise. **Q2**: Why was the SE(3) diffusion process chosen over the standard Euclidean diffusion process? While the standard diffusion process may generate nonsensical conformations, the proposed diffusion process might not explore the entire conformation space. We have already included results of the standard Euclidean diffusion process in our original manuscript (named as Gauss DSM in Table 1). As shown in the table below, our results suggest that SE(3) diffusion is better than Euclidean diffusion on both test sets. | | CASF (crystal) | CASF (docked) | SAbDab (crystal) | SAbDab (docked) | |----------------------|----------------|---------------|------------------|-----------------| | Eucliean diffusion | 0.638 ± 0.017 | 0.632 ± 0.016 | 0.335 ± 0.038 | 0.330 ± 0.022 | | SE(3) diffusion | 0.656 ± 0.012 | 0.651 ± 0.011 | 0.361 ± 0.051 | 0.360 ± 0.051 | **Q3**: In section 4.1, is the center of mass computed based on atomic weight? For small molecule ligands, the center of mass is set as a constant for all atom types because the atomic weight of the most common atoms are quite close (e.g., C=12, N=14, O=16). For antibodies, the center of mass is computed based on amino acid weight because they are more diverse (e.g., Glycine = 57, tryptophan = 204). To validate this modeling choice, we run our model with either constant weight or true atomic weight. As shown in the table below, the effect of using atomic weight is negligible for small molecules but quite helpful for antibodies. | | CASF (crystal) | SAbDab (crystal) | |---------------|----------------|------------------| | Constant weight | 0.656 | 0.340 | | True atomic/residue weight | 0.657 | 0.352 | **Q4**: How does the model perform in the "Docked structure" scenario when the docking error is high? Our model performance is quite robust to docking error. In the antibody case, the RMSD of ZDOCK is around 19.4 but the performance of NERE DSM under docked structures is quite close to crystal structures (Pearson correlation: 0.361 vs 0.360; Spearman correlation: 0.385 vs 0.363). **Q5**: Why does contrastive learning underperform denoising score matching (DSM)? Further elaboration on this would be beneficial. Contrastive learning is a simpler objective than DSM because it only teaches the model to assign lower energy to crystal structures than perturbed structures. In contrast, the DSM objective not only teaches the model to assign lower energy to crystal structure, but also to reconstruct the original ligand/antibody pose from perturbed structures. It requires the model to leverage more geometric information in order to succeed in this more difficult task. **Q6**: Are there any experiments conducted on the docking benchmark of the CASF test set? It seems that the proposed method is more suitable for identifying the correct binding pose (ranking among multiple poses of a specific molecule) rather than ranking between binding poses of multiple molecules. We agree that the docking experiment is also relevant to our approach. Our current focus is protein-ligand binding affinity prediction but we will include this experiment in our future work. Thank you for your suggestions. **Q7**: Different noise levels appear to lead to different energy functions, while only the minimal noise level can approximate the energy function of the real data distribution. How does the proposed method handle multiple noise scales? In our implementation, we use multiple noise scales to train our model. In each training step, we first randomly sample a noise scale $\sigma \sim [0, 10]$ for the rotation noise and then sample a rotation vector $\omega \sim \mathcal{N}_\mathrm{SO(3)}(\sigma)$. This dynamic sampling strategy allows us to sample noise of both small and large scales. The small-scale noise helps the model to approximate the real data distribution, while the large-scale noise allows the model to be trained on a diverse range of perturbed structures. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I appreciate your efforts to address my concerns, but I remain concerned aboout the reasonability of varying noise levels. Specifically, introducing different noise levels into the data distribution results in diverse corrupted distributions, each of which may correspond to a distinct energy function. An extreme case would be the addition of extremely high levels of noise, which could obliterate all information from the original distribution. Yet, in the training objective, a single energy function is employed to model different noise levels simultaneously. Could this approach have some potential problems? --- Reply to Comment 1.1.1: Title: Multiple noise level Comment: Thank you for your comment. Our approach is inspired by Noise Conditional Score Network (NCSN, Yang et al 2020), where they sampled different noise levels during training. The difference between our approach and NCSN is that our model is not conditioned on the noise level. As a result, our current training objective is matching our energy function to a mixture distribution, i.e., a mixture of perturbed distributions given by different noise levels. This mixture distribution is a valid distribution (analogous to Gaussian mixture models), but could be sub-optimal because it is not close enough to the original data distribution. Currently, we are not adding extreme levels of noise and that's why our model performs reasonably well. In principle, there are two solutions to address your concern. First, we can train our model with only one (small) noise level. We can tune this hyper-parameter on the validation set to find the optimal noise level. Second, we can condition the energy function on the noise level by appending the input with some encoding of $\sigma$. Similar to NCSN, we are learning multiple energy functions (with shared parameters) at the same time, each corresponds to one noise level. At test time, we can choose the one with the best cross-validation performance. We will explore this direction in our future work.
Rebuttal 1: Rebuttal: We want to thank all reviewers for their valuable comments and suggestions. We would like to summarize three main results here in response to some common questions/suggestions. The results are included in the attached PDF file (rebuttal Table 1 and Figure 1a-h). ### Additional metrics As suggested by Reviewer X8hw and UbeA, we report Spearman correlation and p-values as additional metrics in rebuttal Table 1. We could not report root mean square error (RMSE) or mean absolute error (MAE) because our model does not predict absolute affinity values. Shifting or scaling $E(A, X)$ by any constant will be equally optimal under the DSM objective. ### Additional benchmark datasets We include two additional benchmark datasets to incorporate reviewer X8hw’s suggestions. First, we adopt an HER2 antibody affinity maturation dataset [1] to investigate the utility of NERE DSM as a pre-training strategy. This dataset has 422 antibody sequences with experimental binding affinities against HER2. We report the Spearman correlation between the predicted binding energy and experimental binding affinity on this dataset. To investigate whether the learned representations from the unsupervised approach can be beneficial for supervised downstream tasks, we randomly sample a small number of HER2 data to fine-tune the pre-trained NERE model. Results on this dataset are shown in rebuttal Figure 1f. When fine-tuned on the same number of HER2 data, we find that representations learned by NERE outperform representations learned under supervised pre-training ($\mathrm{FANN_{transfer}}$). Second, we adopt a free energy perturbation (FEP) benchmark [2] (developed by Merck) to investigate the utility of unsupervised learning in the context of protein-ligand binding. It has eight protein targets (cdk8, cmet, eg5, hif2a, pfkfb3, shp2, syk, and tnks2) and 264 ligands in total. We calculate the Spearman correlation ($R_S$) between the predicted binding energy and experimental binding affinity for each of the eight targets and report the average correlation for each method. Results on this dataset are shown in rebuttal Figure 1b-d. Our main finding is that NERE outperforms supervised baselines on this FEP test set. These results suggest that unsupervised models can be also useful for protein-ligand binding even though there is a fair amount of labeled affinity data. The experimental setup for these two experiments are detailed in our response to reviewer X8hw. ### Including side-chain atoms into the model Several reviewers have raised concerns that the current model does not model the flexibility of proteins/ligands and the side chains of proteins. Given the limited time, we partially address these concerns by including all protein atoms (backbone + side chains) as input instead of only $C_\alpha$ atoms. This modification only changes the input of our model while the model architecture and NERE DSM objective remains the same. We refer to this extended model as NERE DSM (all-atom) and the original model as NERE DSM ($C_\alpha$-only). As shown in rebuttal Figure 1e, we found that adding side-chain information gave a similar performance on the original CASF and SAbDab test sets, but substantially improved model performance on the new FEP and HER2 datasets. On the FEP test set, the all-atom model achieved $R_S=0.380$ while the $C_\alpha$-only model had $R_S=0.192$. On the HER2 test set, the all-atom model achieved $R_S=0.230$ while the $C_\alpha$-only model had only $R_S=0.043$. Overall, these results suggest that incorporating side-chain information can be beneficial for protein-ligand/antibody-antigen binding energy prediction. In the future, we plan to model the flexibility of rotatable bonds and protein side-chain atoms by introducing a side-chain DSM loss, which adds rotation noise on the rotatable bonds and side-chain atoms and use NERE to predict the rotation noise. ### References 1. Shanehsazzadeh et al., Unlocking de novo antibody design with generative artificial intelligence, biorxiv 2023 2. Schindler et al., Large-scale assessment of binding free energy calculations in active drug discovery projects, 2020 Pdf: /pdf/4b6984abece73fc1e5a4b1cb0402729828997a2a.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper introduces NERE, an unsupervised method for predicting protein-ligand binding affinity. The authors propose a generative modeling approach that utilizes an energy-based model (EBM) to capture the characteristics of protein-ligand complexes. The EBM is trained by maximizing the log-likelihood of crystal structures from the PDB database. NERE incorporates random rotations and translations of the separate parts of the molecule complexes and predicts the optimal rotation and translation as the EBM's output. To train the EBM, SE(3) denoising score matching is employed. The effectiveness of the proposed approach is evaluated through experiments on both protein-ligand and protein-protein binding. The results reveal a positive correlation between the predicted energy and binding affinity. The authors demonstrate that their method surpasses all unsupervised baselines and even outperforms supervised baselines in the case of antibodies. Strengths: Strengths of the review paper: - The paper introduces a novel approach for unsupervised protein-ligand binding energy prediction by utilizing an equivariant rotation prediction network. This innovative approach contributes to the existing body of knowledge in the field. - The paper proposes a unique method to predict rotation and translation as part of an unsupervised data augmentation technique. This approach enhances the understanding and modeling of protein-ligand interactions, as well as the correlation with molecule interaction energies. - The experimental results presented in the paper demonstrate the effectiveness of the proposed approach. The approach outperforms all unsupervised baselines, showcasing its superiority in predicting protein-ligand binding energies. Notably, the method even surpasses supervised baselines in the case of antibodies, indicating its potential for practical applications. - The paper exhibits clear and well-written content. The authors effectively convey technical concepts and methods, ensuring the readers can grasp the details of the proposed approach and its implementation. The organization of the paper enhances readability and comprehension. - The authors demonstrate their awareness of the limitations of their work. Weaknesses: - The paper lacks visualizations of the correlation between the predicted energies and binding affinities. Visual representations, such as scatter plots or correlation matrices, would enhance the understanding of the relationship and provide a clearer picture of the results. Additionally, providing information about the distribution of outliers would give insights into the robustness of the proposed approach. - The authors could consider spearman correlation and p-value as additional evaluation metrics, as the ranking between different ligands might be more useful for drug discovery and virtual screening instead of absolute binding affinity values. - I am curious about the potential impact of the proposed unsupervised method on downstream tasks when serving as a pre-training strategy. It would be valuable to investigate whether the learned representations from the unsupervised approach can be beneficial for supervised downstream tasks. - The paper lacks an explicit discussion on the significance and relevance of an unsupervised method in the context of protein-ligand binding energy prediction. It would be beneficial to design experiments or provide theoretical arguments that illustrate how the utilization of unlabeled data, in comparison to traditional supervised methods, contributes to the understanding and performance of the model. Demonstrating the utility of unsupervised learning in this specific area would provide a stronger rationale for the proposed approach. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and suggestions. Please read our response below and let us know if you have more questions. **Q1**: The paper lacks visualizations of the correlation between the predicted energies and binding affinities. In rebuttal Figure 1a, we visualize the correlation between the predicted energies and binding affinities on the small molecule test set (CASF) and the antibody test set (SAbDab). We did not find significant outliers in either crystal or docked settings. **Q2**: The authors could consider spearman correlation and p-value as additional evaluation metrics In the rebuttal Table 1, we report the spearman correlation and p-value of all methods on both test sets. Our model (NERE DSM) remains the best under both metrics. **Q3**: I am curious about the potential impact of the proposed unsupervised method on downstream tasks when serving as a pre-training strategy. It would be valuable to investigate whether the learned representations from the unsupervised approach can be beneficial for supervised downstream tasks. We adopt an HER2 antibody affinity maturation dataset [1] as an additional benchmark to investigate the utility of NERE DSM as a pre-training strategy. This dataset has 422 antibody sequences with experimental binding affinities against HER2. The pre-trained NERE DSM model achieved an average spearman correlation of 0.230 on this dataset, while the best supervised baseline ($\mathrm{FANN_{transfer}}$) only yields 0.090. To further improve the performance, we selected a subset of the HER2 data to fine-tune the pre-trained NERE DSM model and the supervised $\mathrm{FANN_{transfer}}$ model. Specifically, we split the HER2 dataset into a training set of $N$ data points and use the rest of the dataset for testing. As shown in rebuttal Figure 1f, NERE DSM consistently outperforms $\mathrm{FANN_{transfer}}$ when fine-tuned by $N=50, 100, 150, 200$ data points. These results suggest that NERE DSM is a more effective pre-training strategy than supervised pre-training. **Q4**: The paper lacks an explicit discussion on the significance and relevance of an unsupervised method in the context of protein-ligand binding energy prediction. It would be beneficial to design experiments that illustrate how the utilization of unlabeled data, in comparison to traditional supervised methods, contributes to the performance of the model. Indeed, the performance of unsupervised methods is much lower than supervised models on the CASF test set (PDBBind core set). However, the supervised models are trained and evaluated on the same PDBBind database and it is still unclear whether they generalize well to new molecules with different structures. To this end, we adopt the free energy perturbation (FEP) benchmark [2] (developed by Merck) as an additional independent test set. It has eight protein targets (cdk8, cmet, eg5, hif2a, pfkfb3, shp2, syk, and tnks2) and 264 ligands in total. As shown in Figure 1b, molecules in this test set have a different distribution from the PDBBind training set. Therefore, a model needs to generalize to a different chemical space to perform well on this test set. We applied NERE DSM to this FEP test set, with a small modification in the input layer to include protein side-chain atoms (see our global response above). We calculate the Spearman correlation ($R_S$) between its predicted binding energy and experimental binding affinity for each of the eight targets and report the average correlation as our evaluation metric. Importantly, all hyper-parameters are selected using only the PDBBind validation set to ensure this experiment is a blind, unbiased evaluation. As shown in Figure 1c, our model outperforms not only the unsupervised models (contrastive learning and Gaussian DSM) but also the supervised models. The performance is much better than PLANET & IGN (0.380 vs 0.314 and 0.252) and slightly better than TankBind (0.380 vs 0.376). The Spearman correlation for each target is shown in Figure 1d. Overall, these results suggest that unsupervised models are more robust than supervised models when there is distributional shift (a common problem when models are deployed in the wild). Therefore, unsupervised models can be also useful for protein-ligand binding, even though there is a fair amount of labeled affinity data. References 1. Shanehsazzadeh et al., Unlocking de novo antibody design with generative artificial intelligence, biorxiv 2023 2. Schindler et al., Large-scale assessment of binding free energy calculations in active drug discovery projects, 2020 --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: Thank you for your detailed response. I think the results are more convincing than before and this an interesting task so I have raised my score accordingly. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you very much. Your comments are very beneficial to us and we sincerely appreciate your effort in reviewing our paper.
null
null
null
null
null
null
Accelerated On-Device Forward Neural Network Training with Module-Wise Descending Asynchronism
Accept (poster)
Summary: The paper focuses on enabling training on memory-constrained platforms. Instead of using conventional backpropagation-based methods, the proposed approach is based on forward gradient descent (FGD), which approximates the gradients through only forward passes. The paper claims that a brute-force utilisation of FGD leads to resource underutilisation in memory-constrained devices. To counteract this, the work proposes AsyncFGD, a method that enables multiple parallel workers to process different parts of the network using different samples, while allowing varying levels of staleness in the parameters of these parts. Strengths: 1) Reducing the memory requirements of the training stage is a key and still open challenge towards enabling on-device training. In this context, FGD-based algorithms constitute an interesting approach that is still largely unexplored. 2) The paper provides both theoretical analysis and empirical evaluation of the proposed approach. Weaknesses: 1) In the motivation of work, the paper makes a number of arguments that are hard to justify. In Section 2.2, the paper claims that "in edge devices with limited computational power, potential idle workers due to synchronization in synchronous pipelines like [12] are not optimal." However, this is rarely, or not at all, the case. Specifically, edge devices typically utilise their processing resources fully by exploiting the intra-layer parallelism, *e.g.* by performing convolutions or matrix multiplications. For the processor of such platforms to remain underutilised, either the device lies more towards server-grade or the target model is very lightweight, with small layers that contain minimal parallelism. This could make more sense in server settings, where layer pipelining has already been proposed (GPipe, PipeDream) as also mentioned in the paper. Therefore, the argument about Forward Locking and the serialisation of computations in the forward pass of FGD-based methods does not seem to be relevant for memory-constrained edge devices, where layer pipelining is hardly used during training. As such, there are important questions with respect to the suitability of AsyncFGD. The presented implementation uses multiple workers to process different parts of the network, *i.e.* it employs module pipeling. For this to make sense as a strategy, the computational resources of the target device must be underutilised in the first place, *i.e.* when module pipelining is not used. In this context, what about batching? Batch processing is not touched upon in the paper. Why not have different workers processing different samples from the same batch, thus not requiring any stale parameters? 2) In Section 4.3, the paper makes a claim with respect to the attainable speedup. Specifically, it states that "It is easy to see that AsyncFGD can fully utilize the computation resources, thus achieving considerable speedup". This statement is very strong and cannot be supported solely by Fig. 2. In Fig. 2, we see a simple performance model for AsyncFGD, which indicates the potential acceleration *under certain conditions*. These conditions include (among others) *i)* whether the processor is really underutilised, *i.e.* whether there are really free cycles to have more than one module running, and *ii)* whether the multiple workers become external memory bandwidth-bound, in which case AsyncFGD can lead to slowdown - this is very common in edge devices where multiple components of the target chip share the same bandwidth to the external memory. In the scope of this work that aims at memory-constrained devices, the assumption that the memory bandwidth is abundant contradicts the whole setup. Overall, the logic of the paper with respect to acceleration leaves out many important considerations. 3) In Section 6.6, one would expect the speedup to plateau or even lead to slowdown, when *K* increases above a certain point, where the workers would start to context-switch excessively. How is *K* selected? What exactly affects it? Both the model architecture, the target dataset and the target platform would play important roles (see also point 2 above on this). In its current version, the paper is missing a thorough investigation of this fact. 4) In Section 6.2, while the objective of the experiments and the selection of baselines are sound, the network architectures are arbitrary and simple. It is hard to draw conclusions on the efficacy of AsyncFGD. 5) In Section 6.4, it is not clear what the setup is. The models are pre-trained on ImageNet and then fine-tuned on a given dataset using one of the algorithms in Table 2. Why does AsyncFGD* give higher accuracy than FGD and plain AsyncFGD? Furthermore, even Async* falls short compared to the BP-based baseline. Especially, given the issues of point 1) above, it is not evident that AsyncFGD has useful merits or under which conditions it would be a better choice than competing methods. 6) In Section 6.5, the memory footprint comparison between FGD and AsyncFGD is missing in Fig. 3a and b - how do they compare? Importantly, how does the selection of *K* affect the memory footprint? Having more parallel workers can have a significant impact on memory consumption, through thread bookkeeping and cache misses. 7) As specified in the Appendix, only a single random seed is used in the experiments. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: 1) Is something accidentally missing from Section 6.1? 2) Please also see points 5) and 6) above for things that were not clear. 3) Please proof-read the paper as there are some typos. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: The paper does not explicitly discuss its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer bhPv, Thank you for your comprehensive review and insightful comments. We have carefully considered your feedback and offer the following responses: **Weaknesses:** **Q1:** "Does idle worker really exists when training on Edge device? If so, why not batching inputs to parallel computation among workers ?" **A1:** Your insightful question touches on two critical dimensions of efficient training on edge devices: the existence of idle workers and the potential for parallel computation through batching. Let's delve into these aspects: 1. **Existence of Idle Workers:** Our method, although not a one-size-fits-all solution, can specifically addresses scenarios where edge devices, despite unused computational power, face memory constraints with BP-based algorithms, i.e. mismatch for parallisim and memory consumption. Specifically, temporal models like RNNs optimized by BPTT or SNNs optimized by Spatio-temporal backpropagation (STBP), can be highly memory-intensive. On edge devices, this often leads to reaching memory limits before fully utilizing all computational resources, resulting in idle workers, for example, on AGX Orin, which is employed in our experiment, its typical when RNN or SNN dealing inputs with more than 300 timestamps, the overall utilization of computing power only takes 40% without any spare memory to further utilize the idle workers. As emphasized in [1], the substantial memory demands pose a challenge in implementing RNNs on edge devices. AsyncFGD, our solution, employs Forward Gradient to relax memory constraints, thus fully utilizing otherwise idle workers. 2. **Batch Processing Considerations**: We concur with your insights on the merits of batch processing. While it is effective for historical data, there exist real-time scenarios, especially when edge devices are interfaced with specific sensors, that necessitate sequential data processing: - **Real-time Adaptation and Streaming Data Learning**: Edge devices might need to swiftly adapt to and learn from newly acquired data in real-time fashion [2]. Moreover, when processing sequential information with temporal connections, as collected directly by sensors [3, 4], batching could interfere with these inherent relationships, disrupting the learning process [5]. - **Data Privacy and Security**: For applications dealing with sensitive data, storing it for batching could introduce privacy and security vulnerabilities. Immediate processing, followed by data disposal, can mitigate these risks. - **Energy Efficiency**: Given the energy constraints typical of edge devices, batching can sometimes be more energy-intensive due to the demands of memory storage and data retrieval. In contrast, processing data as it arrives can be more energy-efficient in specific scenarios. In summary, we view our method as a viable solution for specific situations, particularly when edge devices face memory constraints with BP-based algorithms. However, we acknowledge that it may not be suitable for all scenarios. Your valuable feedback is instrumental in refining our approach for the revised manuscript. **Q2:** "Claims about Attainable Speedup ignore certain considerations." **A2:** We recognize your feedback on our speedup claims, especially in the context of bandwidth and computing power bottlenecks. In our experiments, we demonstrated positive results on AGX Orin, an edge device used in applications such as autonomous driving, robotics, and drones [7]. AGX Orin's off-chip bandwidth supports enough throughput, and its on-chip bandwidth facilitates communication between different workers on the chip. These features align with our AsyncFGD method, allowing for maximum utilization of computing resources and achieving acceleration. Given the broad definition and varied capabilities of edge devices, we acknowledge that our method might not achieve speedup on IoT-targeted chips, where bandwidth and computing power are restricted to even deploy a deep learning model. However, we believe our method has its merits in certain contexts, especially considering the rapid advancements in edge device's bandwidth, as evidenced by High Bandwidth Memory FPGAs. In light of your practical considerations, we will refine our claims and narrow the scope of our research in the revised manuscript. Your insights have been instrumental, and we appreciate your constructive feedback. **Q3:** "How is $K$ selected, considering model architecture, dataset, and platform?" **A3:** We appreciate your inquiry about the selection of $K$. Our extra experiments, detailed in Table 3 of the rebuttal PDF, confirm that as $K$ increases, the overheads can outweigh the benefits, leading to a plateau or even slowdown. This often happens when $K > 4$. Thus, the choice of $K$ requires careful consideration of the model's complexity, data characteristics, and platform constraints and we will add more experiments in the revised manuscript. Thank for your observation that has guided us to refine our work. **Q4:** Section 6.2's simple, arbitrary network architectures hinder AsyncFGD conclusions. **A4:** We apologize for any confusion. For comparisons with other BP-free algorithms, we intentionally used simpler network architectures in Section 6.3, as architectures used in the original papers are rather simple (for example, 2$\times$ 1000 FC layers in DFA [8] and two convolutional layers with 64 and 256 channels with $3 \times 3$ kerne in DRTP [9]). For efficient transfer learning in Table 2, and Fig 3(c), we selected widely-accepted lightweight models to represent a broader range of efficient models suitable for edge devices. We have reached the page limit. Please refer to the global author rebuttal for the remainng. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their detailed response. Please make sure to integrate your answers to the main paper, so that its scope is more clearly defined. A few further comments: **A1.1:** While it is true that memory-bound workloads (*e.g.* RNNs) can leave the computational resources underutilised due to the lack of enough main memory bandwidth, for the computational resources to perform another piece of work, unless there is data reuse in place, data still have to be transferred from the main memory to the processor. **A1.2:** Batch processing can also be applied in real-time applications, depending on the sensor's data generation rate and the processing rate of the system. For instance, for a camera with a capture rate of 60 FPS and an application that requires 30 FPS for good user experience, the processing system could potentially use a batch size of 2. --- Reply to Comment 1.1.1: Comment: Dear Reviewer bhPv, Thank you for recognizing our efforts and for your insightful comments. We truly appreciate your engagement with our work, and we would like to further clarify some points to address your concerns and any misunderstandings from our previous rebuttal. **For A1.1:** We apologize for any misunderstanding in our previous communication. When referring to 'limited memory' in A1.1, we were highlighting the constraints on memory capacity, not the bandwidth. Indeed, there can be a mismatch between bandwidth and capacity, as evidenced by Intel® Stratix® 10 HBM2 DRAM with 256GB/sec bandwidth but only 4GB or 8GB density, or the 25.6GB/sec 4GB memory on Nvidia Jetson Nano. This is particularly relevant for edge devices with deep learning accelerators, such as Nvidia Jetson Orin NX, which offers 8GB capacity with a theoretical peak memory bandwidth of 102GB/sec. In such scenarios, an Out Of Memory (OOM) problem may arise when optimizing a sequential model, leaving workers and memory bandwidth underutilized. Our AsyncFGD, by utilizing forward gradient and relaxing constraints on capacity, enables full utilization of potential workers and bandwidth. While a perfect alignment of memory capacity, bandwidth, and computing power may limit AsyncFGD's application, mismatches often occur. Specifically, when memory consumption for optimizing a model breaks the balance, our method shines by discarding intermediate variables during the forward pass, achieving relatively low memory consumption and full utilization of the other two components. **For A1.2:** You are correct in noting the use of batch processing in real-time scenarios. As a complement to your observation, we'd like to highlight that many applications require the preservation of temporal connections in data collected by sensors, where batching could be less suitable. For example, **real-time anomaly detection** in fields like industrial monitoring, healthcare, and security systems often need to take special care for temporal connection and utilize sequential models on edge device. Examples include a multi-scale convolutional recurrent encoder-decoder in [1] or LSTM in [2]. Similarly, **temporal pattern recognition**, where patterns emerge from both spatial and temporal information, also requires take care of temproal connection from data collected by sensors. Human activity recognition from spatio-temporal features [3] or driving behavior recognition using smartphone sensor data [4] are cases where temporal connections along the time dimension are vital. Additonally, **time-series forecasting**, in field like weather forecasting, stock market prediction, energy consumption prediction, utilizing temporal connections in sensor data for predictive modeling and forecasting future values [5, 6]. In these scenarios, batching consecutive 'frames' and sending them to different workers may not be suitable, as it could disturb the temporal connection. Our method is designed to handle such sequential data naturally with a pipeline fasion, preserving its temporal connection while achieving acceleration, as detailed in the Appendix of our original paper. We sincerely thank you again for helping us identify more clearly the scenarios where our algorithm can be applied. Your feedback has been invaluable in refining our work, and we look forward to any further insights you may have. Warm regards [1] Zhang C, Song D, Chen Y, et al. A deep neural network for unsupervised anomaly detection and diagnosis in multivariate time series data[C]//Proceedings of the AAAI conference on artificial intelligence. 2019, 33(01): 1409-1416. [2] Sivapalan G, Nundy K K, Dev S, et al. ANNet: a lightweight neural network for ECG anomaly detection in IoT edge sensors[J]. IEEE Transactions on Biomedical Circuits and Systems, 2022, 16(1): 24-35. [3] Medina M Á L, Espinilla M, Paggeti C, et al. Activity recognition for iot devices using fuzzy spatio-temporal features as environmental sensor fusion[J]. Sensors (Basel, Switzerland), 2019, 19(16). [4] Zhang J, Wu Z, Li F, et al. Attention-based convolutional and recurrent neural networks for driving behavior recognition using smartphone sensor data[J]. IEEE Access, 2019, 7: 148031-148046. [5] Zhang Y F, Fitch P, Thorburn P J. Predicting the trend of dissolved oxygen based on the kPCA-RNN model[J]. Water, 2020, 12(2): 585. [6] Koppe G, Guloksuz S, Reininghaus U, et al. Recurrent neural networks in mobile sampling and intervention[J]. Schizophrenia bulletin, 2019, 45(2): 272-276.
Summary: ThIS paper introduces AsyncFGD, a forward-only based training method. The key idea behidn this paper is integrating asynchronous updates with Forward Gradient Descent (FGD). The authors test their method on several small-scale datasets. Strengths: - The proposed proof of the convergence guarantee of AsyncFGD-SGD is new. - According to the experiment results, this framework achieves a good acceleration rate on small-scale datasets. Weaknesses: - The novelty of this approach is not introduced clearly. There is no fundamental difference between the original forward-only training methods by introducing a pipeline mechanism among different layers and this methods. - The experimental results are only shown on small-scale datasets. - The comparison with other works is not sufficient [1]. - The proposed method has on-par memory consumption with BP based training yet with lower accuracy. The memory cost is crucial for some edge devices. [1] Krithivasan S, Sen S, Venkataramani S, et al. Accelerating DNN Training Through Selective Localized Learning[J]. Frontiers in Neuroscience, 2022, 15: 759807. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - Can the author explain the fundamental difference between the original forward-only training methods by introducing a pipeline mechanism among different layers and this methods? - Can the author show the large-scale dataset experiments? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Please refer to the weakness sections. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer yH2P, Thank you for your comprehensive review and the valuable insights you provided. We genuinely appreciate your feedback, as it offers us a clear direction for refining our work. We have taken the time to address each of your concerns in detail. **Weaknesses:** **Q1:** "Unclear novelty and distinction from original forward-only training Methods with pjipeline parallelism." **A1:** Thank you for your insightful question. Our method's novelty lies in its design to detach dependencies in each module during the forward pass, most importantly, with theoretical grantee and performance comparable to the original version. Specifically: - Dependencies among layers are preserved within each iteration. - The staleness of each module is bounded on a module-wise basis. Our approach's primary goal is to ease the dependency in the forward pass from an iterative optimization standpoint. Supported by our mathematical proof, it converges to critical points under non-convex conditions. Though the asynchronous pipeline parallelism that results from this relaxed constraint may seem similar to merely adding pipeline parallelism to forward gradient, it stems from a fundamentally different motivation and design philosophy. Moreover, it is worth noting that our method is the first to provide asynchronous acceleration on FGD. **Q2:** "Experimental results focus on small-scale datasets." **A2:** We recognize your point regarding the scale of our datasets. To address this, we have conducted additional experiments on ImageNet, ILSVRC2012 datasets. While training the model from scratch led to a gap in accuracy compared to BP, we found that in online training scenarios, efficient transfer learning strategy can be employed to reduced this difference. As detailed in Table 2 of the rebuttal PDF, our AsyncFGD method, when combined with this strategy (Async$^*$), outperformed other BP-free algorithms on these larger datasets, approaching closer to BP's accuracy. **Q3:** "Insufficient comparison with other works like [1]." **A3:** We appreciate your emphasis on a more comprehensive comparison with other works, especially the one mentioned [1]. In response, we conducted further experiments comparing the two methods on ResNet-18 with CIFAR-10 and MNIST. Our results, which are detailed below, reveal that the maximum memory consumption of [1] aligns with BP. However, a direct comparison of accuracy is not entirely equitable, as [1] employs a hybrid approach that combines Forward-only and backpropagation methods. To provide a more nuanced comparison, we also adjusted the ratio of BP-pretrained weights (counted from the front) and presented the findings in the ratio column. This approach ensures a more balanced evaluation, reflecting that our method can achieve comparable accuracy with less memory consumption. *Accuracy and Memory Consumption on CIFAR-10* | ALGORITHM | MAX MOMORY CONSUMPTION | RATIO | ACC | | :----------: | :--------------------: | :---: | :--: | | BP | 3128 | - | 93 | | LoCal+SGD | 3128 | - | 92.8 | | FGD | 1125 | 0.0 | 45.7 | | Async (k=3) | 1187 | 0.0 | 44.8 | | Async (k=3) | 1387 | 0.1 | 48.3 | | Async (k=3) | 1754 | 0.3 | 64.1 | | Async (k=3) | 2247 | 0.5 | 88.3 | | Async (k=3) | 2849 | 0.8 | 92.4 | *Accuracy and Memory Consumption on MNIST* | ALGORITHM | MAX MOMORY CONSUMPTION | RATIO | ACC | | :---------: | :--------------------: | :---: | :--: | | BP | 3120 | - | 98.5 | | LoCal+SGD | 3120 | - | 98.3 | | FGD | 1120 | 0.1 | 63.2 | | Async (k=3) | 1179 | 0.0 | 65.4 | | Async (k=3) | 1347 | 0.1 | 69.3 | | Async (k=3) | 1788 | 0.3 | 84.9 | | Async (k=3) | 2043 | 0.5 | 97.3 | | Async (k=3) | 2831 | 0.8 | 98.4 | **Q4:** "The proposed method has on-par memory consumption with BP based training yet with lower accuracy. The memory cost is crucial for some edge devices." **A4**: We appreciate your observation regarding the memory consumption of our method compared to BP-based training. Indeed, the gap between BP and AsyncFGD narrows when BP-based methods utilize re-materialization techniques [2, 3], in MLP or RNN-based networks. However, as detailed in Figure 1 of the rebuttal PDF and the Appendix in the original paper, it's essential to highlight that AsyncFGD presents a distinct advantage in convolutional layers, which often constitute the majority of computation in networks designed for edge devices. This advantage arises from the intrinsic duplication of the kernel matrix during the forward pass and the fact that the primary memory consumption in AsyncFGD is for placeholders of random tangents of parameters. These factors contribute to the efficiency of our method in convolutional layers. This makes AsyncFGD particularly suitable for edge devices, where memory cost is a crucial consideration. **Questions:** Please refer to weaknesses part. We trust that our detailed responses offer clarity on the concerns you raised. Once again, we extend our gratitude for your constructive feedback. Warm regards [1] Krithivasan S, Sen S, Venkataramani S, et al. Accelerating DNN Training Through Selective Localized Learning[J]. Frontiers in Neuroscience, 2022, 15: 759807. [2] Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. Training deep nets with sublinear memory cost. arXiv preprint arXiv:1604.06174, 2016. [3] Audrunas Gruslys, Rémi Munos, Ivo Danihelka, Marc Lanctot, and Alex Graves. Memory-efficient backpropagation through time. Advances in Neural Information Processing Systems, 29, 2016. --- Rebuttal 2: Comment: Dear Reviewer yH2P, Thank you for your thoughtful review and the time you've invested in our work. We noticed a positive adjustment in your assessment, and we're truly appreciative of your reconsideration. If there are any further areas where you believe we could refine our work, please feel free to share your thoughts. Your insights have been invaluable, and we look forward to any additional guidance you may provide. Best regards
Summary: This paper proposes an asynchronous version of the forward gradient descent (FGD) method, in order to alleviate the forward locking exhibited with FGD and enable more efficient implementation. Specifically, AsyncFGD decomposes a network into K decoupled subnetworks that work as a delayed pipeline and enable asynchronous forward propagation. The authors benchmark their method on an NVIDIA AGX Orin, using a variety of network architectures and datasets. Their method seems to perform better in the fine-tuning setup, where learning rates are small, due to the high variance of forward gradient methods in general. Strengths: - The paper is well motivated, and tackles an important subject: efficient on-device training - The authors properly explain their proposed method, the presentation is clear - The theoretical analysis on the convergence of the proposed method is appreciated - The authors benchmark their method on actual device and report measured performance Weaknesses: - The original FGD paper [1] does not report improvements in memory consumption (see Fig 6). I find it interesting that the proposed method does not suffer from the same. I also found that the overall memory discussion a bit lack-luster. It would greatly improve the quality of this paper, and emphasizes the importance of the proposed method, if some details were provided on how memory consumption is computed (or measured?) for all methods (BP, FGD, and AsyncFGD). How bad is the additional memory overhead due to the proposed asynchronous implementation? - The method has not been benchmarked on big datasets, such as ImageNet. Furthermore, it appears that, much like other forward methods, AsyncFGD cannot scale well to larger problems. This explains why the authors focused on transfer learning applications. This, in my opinion, limits the usefulness of such methods. [1] https://arxiv.org/pdf/2202.08587.pdf Technical Quality: 3 good Clarity: 3 good Questions for Authors: - typo in line 104: define - What is the dataset being evaluated in Fig 3c? - Why isn't the efficient learning strategy applied to FGD in Table 2? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: No discussion on limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer Kesv, Firstly, we would like to express our gratitude for the meticulous review and valuable feedback you provided on our paper. Your insights are instrumental in refining our work, and we have made concerted efforts to address each of the concerns you highlighted. **Weaknesses:** **Q1:**: "Why AsyncFGD's memory consumptino is better than that of the origianl paper, how is the memory mesured or computed?" **A1**: We appreciate you noting the difference in memory consumption between our paper and the original FGD paper [1]. A potential explanation is that the original paper relied on an early beta version of functorch, which may have led to inaccurate measurements. This could explain the perfect alignment of their BP and FGD memory lines. In our study, we used PyTorch's built-in profiler to precisely measure peak memory allocation during optimization. This allowed us to obtain more reliable memory consumption results. Theoretical computation for memory consumption will also be provided in the revised manuscript. **Q2:** "Worry about scalability to larger datasets." **A2:** We acknowledge your valid concerns regarding the applicability of AsyncFGD to larger datasets, such as ImageNet. While our initial focus was on specific use cases, we have broadened our experiments to encompass ImageNet in response to your feedback. These findings are detailed in Table 2 of the rebuttal PDF. We recognize a performance gap between AsyncFGD and BP, particularly due to the variance introduced by random perturbation. However, this challenge can be mitigated in online training and adaptive learning scenarios. In such contexts, edge devices initialized with pre-trained weights need to further adapt to different conditions, like auto-driving models, where devices must adapt to specific road conditions using existing knowledge from previously learned experiences, making our method more applicable. Moreover, our explorations into variance reduction with efficient transfer learning are positive. We are also buoyed by parallel efforts of variance reduction for FGD for large scale model and datasets[2]. **Questions:** 1. **Typo in Line 104**: We apologize for the oversight and have corrected the typo in line 104. We appreciate your attention to details. 2. **Unclear Dataset in Fig 3c**: The dataset evaluated in Fig 3c is CIFAR-10. We recognize that this have not been explicitly mentioned in the original manuscript, and we assure that the necessary clarifications will be incorporated in our revised version. 3. **Efficient Learning Strategy in Table 2**: We apologize for any ambiguity caused by the presentation in Table 2. Our primary objective was to illustrate that AsyncFGD can achieve performance metrics comparable to FGD in transfer learning tasks, even when working with stale parameters. The introduction of the AsyncFGD* column aimed to juxtapose it with AsyncFGD, emphasizing that when paired with efficient transfer learning, our method's accuracy can be further bolstered due to diminished variance. Hence, we did not apply the efficient transfer learning strategy to FGD. In conclusion, we genuinely hope that our clarifications address the concerns you raised. We remain committed to refining our work based on your invaluable feedback and are confident that our revisions will elevate the quality of our paper. Once again, we extend our heartfelt thanks for your constructive insights. Warm regards [1] [https://arxiv.org/pdf/2202.08587.pdf](https://arxiv.org/pdf/2202.08587.pdf) [2] Ren M, Kornblith S, Liao R, et al. Scaling forward gradient with local losses[J]. arXiv preprint arXiv:2210.03310, 2022. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. Regarding the memory footprint, I believe both the measured and theoretical memory consumption of the proposed method (and the BP baseline) should be discussed to have a more interesting and reliable discussion. This will greatly improve the quality of the manuscript. Regarding the scalability issue, it still seems that the proposed method falls short compared to BP. This should be highlighted as a potential limitation of the work. --- Reply to Comment 1.1.1: Title: Many thanks Comment: Dear Reviewer, We sincerely appreciate your thoughtful insights regarding memory footprint and scalability. Your suggestion to explore both measured and theoretical memory consumption will undoubtedly enrich the manuscript. We also recognize the importance of clearly highlighting the scalability issue as a potential limitation. Rest assured, these valuable points will be carefully addressed in our revisions. With heartfelt thanks --- Rebuttal 2: Comment: Dear Reviewer Kesv, We hope our rebuttal has successfully addressed your concerns. Your insights have been instrumental in enhancing our work, and we are truly grateful for your thoughtful review. If you have any further concerns or suggestions, please know that we welcome them wholeheartedly. As the deadline for the author-reviewer discussion is approaching, we would appreciate any additional feedback you may have at your earliest convenience. Thank you once again for your valuable contribution to our research. We look forward to hearing from you. Best regards --- Rebuttal Comment 2.1: Comment: Dear Reviewer Kesv, Thank you for your insightful and valuable comments. As we approach the final stages of author-reviewer discussion, we want to ensure that our responses have fully addressed your concerns. Your expertise has greatly contributed to our work, and we are committed to incorporating your suggestions in the best possible manner. If there are any points that still require clarification or further refinement, please don't hesitate to let us know. Your continued guidance is deeply appreciated. Once again, thank you for your dedication, assistance, and the positive impact you have made on our research. With sincere gratitude
Summary: This paper proposes a novel forward gradient descent method named AsyncFGD to decouple dependencies between layers and thus maximize parallel computation. The authors demonstrate that their method can reduce memory consumption and enhance hardware efficiency through empirical evaluations on AGX Orin. Strengths: The proposed method of using asynchronized forward gradient descent is interesting and effective, and the authors provide both theoretical analysis and empirical verification, where the latter includes both accuracy and efficiency. The authors also provided extensive analysis and detailed setup. Weaknesses: One potential issue is that it is not very clear if this work is very suitable for a machine learning venue, as a large portion of the paper are talking about architecture- or system-level issues. Also, from Table 1, it seems the proposed method is not able to achieve consistently better results,like the sDFA or DFA can be better for MNIST, among others. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could this method also improve the energy efficiency? Could there be some comparison, even if basic analysis? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This paper did not talk about limitations. One potential issue might be related to the environment issue, like carbon emission of the proposed method compared to conventional ones. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer empf, We sincerely thank you for your comprehensive review and the constructive feedback you provided. Your insights are invaluable, and we have made efforts to address each of the concerns and questions you raised. **Weaknesses:** **Q1:** "Unclear Relevance to Machine Learning Venue." **A1:** You highlighted that a significant portion of our paper focuses on architecture- or system-level issues, which might seem tangential to a machine learning venue. We understand this perspective. However, the system we are exploring possesses unique properties intrinsically tied to Machine Learning. For instance, the layer-wise dependency in each iteration and the iteratively based workload are not typical of arbitrary system processes. Moreover, the strategies we employed to loosen the dependencies among workers are fundamentally rooted in ensuring the convergence of a Machine Learning Algorithm, as evidenced by our comprehensive proof, thus also indicating that we are trying to acclerate FGD from a more algorithmetic perspective. Nevertheless, we acknowledge your feedback and will strive to align our discussions more closely with the machine learning context in the revised manuscript. **Q2:** "Inconsistency of Advantageous Results of AsyncFGD in Table 1." **A2:** We value your observation of the advantage of AsyncFGD over other algorithms. To clarity, the target of training on edge device typically not in a train-from-strach, but a online learning fashion, with previous experince and incoming samples from conditions where the device operates. Therefore, our initial Table 1 results may not fully capture AsyncFGD's strengths. Additional experiments simulating online learning offer a more fitting comparison, demonstrating that AsyncFGD's unbiased gradient estimation surpasses other BP-free algorithms relying on random feedback weights. **Questions:** 1. **Could this method also improve the energy efficiency? Could there be some comparison, even if basic analysis?** You raised an insightful query regarding our method's potential to enhance energy efficiency. We believe that online learning on edge devices, where a combination of previous experience (weights) and incoming samples are processed, is an important implementation scenario for both on-device learning and our AsyncFGD. In such a context, our method offers efficiency by requiring fewer computations per input sample. Specifically, AsyncFGD needs only 2 GEMM (General Matrix Multiply) operations per layer—one for the forward pass and the other for the Jacobian-vector product (JVP)—compared to 3 for BP. This reduction in total GEMMs per input sample could translate into potential energy savings. We recognize the importance of this aspect and will include a more detailed analysis, along with benchmarks on different platforms, in the revised version of our paper. 2. **One potential issue might be related to the environment issue, like carbon emission of the proposed method compared to conventional ones.** You raise an excellent point about potential environmental impacts like carbon emissions. Analyzing this aspect is indeed complex, as many factors contribute to the overall environmental footprint. On one hand, AsyncFGD might produce more emissions due to its emphasis on maximizing worker utilization, extra context switching, and random tangent regeneration. On the other hand, leaving workers idle and prolonging training can also waste energy, contributing to extra emissions. Thus, there exists a nuanced tradeoff between emissions from longer training time and emissions from additional computations. To provide a comprehensive understanding, we will conduct benchmarks to compare energy consumption and emissions for AsyncFGD against conventional methods across various scenarios. This analysis will enable us to assess the full environmental impact of our method. We appreciate your valuable suggestion to examine this critical aspect, and we are committed to addressing it in our revised manuscript. We trust that our responses address the concerns and questions you raised. We are committed to refining our work based on your feedback, confident that the revisions will bolster the quality of our paper. Once again, we extend our gratitude for your constructive comments. Warm regards
Rebuttal 1: Rebuttal: ### Sincere Gratitude Dear All Reviewers and Program Chairs of NeurIPS 2023, We would like to extend our heartfelt gratitude to each of you for the time, effort, and expertise you have invested in reviewing our manuscript. Your constructive feedback, insightful comments, and valuable suggestions have been instrumental in identifying areas for improvement and refinement. We have taken all your comments into careful consideration and have made corresponding revisions to address the concerns and enhance the quality of our work. We believe that these changes not only clarify our contributions but also strengthen the overall impact and relevance of our research. We sincerely appreciate the opportunity to engage in this collaborative process and look forward to any further feedback you may have. Your dedication to maintaining the rigor and integrity of the scientific process is truly commendable, and we are honored to be part of this scholarly community. Thank you once again for your thoughtful review. Warm regards ### Remaining Rebuttal #### Remaining Rebuttal for Reviewer bhPv: **Q5:** "Section 6.4 unclear for AsyncFGD* accuracy, Async* falls short of BP-baseline." **A5**: The enhanced accuracy of AsyncFGD* is a result of our efficient transfer learning strategy, which effectively reduces variance. Historically, we've observed that FGD suffer more from variance than optimizing on the subset of parameters. While our method's accuracy might trail BP, we have acknowledged this limitation in our conclusion. We are also optimistic about ongoing efforts to further reduce the variance of forward gradient [6]. **Q6:**: "Section 6.5 lacks memory footprint comparison; how does K affect it?" **A6:** In response to your query, we conducted further experiments. As detailed in Table 4 of the rebuttal PDF, our findings reveal a 30% increase in memory usage when $K=4$. These results emphasize the importance of carefully selecting the value of $K$, considering both memory constraints and performance goals. Please refer to the rebuttal PDF for a comprehensive overview. **Questions:** 1. **Missing Content in Section 6.1:** Our apologies for the oversight. We inadvertently added a redundant subsection 6.1. 2. **Clarifications on Sections 6.5 and 6.6**: We appreciate your questions regarding Sections 6.5 and 6.6. We have answered them above. 3. **Proofreading and Typos**: We apologize for any oversight in proofreading. We have taken your feedback into account and have thoroughly reviewed the paper to correct any typos. We hope our responses address your concerns adequately. We're committed to refining our work based on your feedback and believe that the revisions will enhance the paper's quality. Once again, thank you for your constructive comments. Warm regards [1] Rezk N M, Nordström T, Ul-Abdin Z. Shrink and eliminate: A study of post-training quantization and repeated operations elimination in RNN models[J]. Information, 2022, 13(4): 176. [2] Pellegrini L, Lomonaco V, Graffieti G, et al. Continual learning at the edge: Real-time training on smartphone devices[J]. arXiv preprint arXiv:2105.13127, 2021. [3] Hagenaars J, Paredes-Vallés F, De Croon G. Self-supervised learning of event-based optical flow with spiking neural networks[J]. Advances in Neural Information Processing Systems, 2021, 34: 7167-7179. [4] Schaefer S, Gehrig D, Scaramuzza D. Aegnn: Asynchronous event-based graph neural networks[C]. Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 12371-12381. [5] Pan Y, Cheng C A, Saigol K, et al. Agile autonomous driving using end-to-end deep imitation learning[J]. arXiv preprint arXiv:1709.07174, 2017. [6] Ren M, Kornblith S, Liao R, et al. Scaling forward gradient with local losses[J]. arXiv preprint arXiv:2210.03310, 2022. [7] [DRIVE Hyperion Autonomous Vehicle Development Platform | NVIDIA Developer](https://developer.nvidia.com/drive/hyperion) [8] Nøkland A. Direct feedback alignment provides learning in deep neural networks[J]. Advances in neural information processing systems, 2016, 29. [9] Frenkel C, Lefebvre M, Bol D. Learning without feedback: Fixed random learning signals allow for feedforward training of deep neural networks[J]. Frontiers in neuroscience, 2021, 15: 629892. ### Additional PDF We have included a PDF file containing the majority of the results from our additional experiments. This supplementary material offers an extended view that contributes to a more comprehensive understanding of our research. Pdf: /pdf/e14d64d5dfccf57a698db3e3f1540aa31a7c320f.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
A Unified Framework for U-Net Design and Analysis
Accept (poster)
Summary: This paper proposes a formal definition of U-nets, a crucial building block of modern deep learning pipelines such as diffusion models. Thanks to this definition it is possible to both get theoretical results explaining some of the u-nets behaviours, and generalize u-nets to settings more exotic than the 2D images. Experiments are then conducted on various modalities to validate the theory. Strengths: - I agree with the main selling point of the paper : U-Nets have been under studied, especially in the context of diffusion models, and it is more than necessary to put an end to that. - I think the formalism effort presented in this work is great, and it was indeed, to the best of my knowledge, lacking. - the diversity of the modalities studied showcases the potential impact of better understanding U-Nets Weaknesses: - **inductive bias**: theorem 2 is the cornerstone of the theoretical explanation of this work as to why U-Nets are effective in diffusion. Yet I think that it has 2 problems. First, the result is about the Haar decomposition of the diffusion process, and therefore it applies to all models that have a multiscale structure with average pooling, not specifically U-Nets. Second, and in my view much more problematic, diffusion pipelines often use an "epsilon-type" prediction framework, in which the role of the U-Net is not to predict the denoised image (or signal in general), but rather the noise. There are multiple examples of this that can be found either in the examples of the diffusers library of HuggingFace or in the seminal papers of Ho and Song: - in the unconditional generation example: https://github.com/huggingface/diffusers/blob/main/examples/unconditional_image_generation/train_unconditional.py#L576, when prediction type is "epsilon", the loss is computed as the MSE between the output of the model and the noise, and one can see that this is the default mode (L229), not changed in the way the script is called (https://github.com/huggingface/diffusers/blob/main/examples/unconditional_image_generation/README.md) - in the text-to-image generation example, the prediction type is taken by default from the noise scheduler which has it by default has "epsilon" (https://github.com/huggingface/diffusers/blob/716286f19ddd9eb417113e064b538706884c8e73/src/diffusers/schedulers/scheduling_ddpm.py#L121), not changed in the way the script is called (https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/README.md) - In [7], algorithm 1 clearly shows that it's the noise that is learned - In the implementation of [34], the noise `z` is used used in the MSE with the model output (https://github.com/yang-song/score_sde/blob/main/losses.py#L115) As a consequence I think the theoretical result does not cover the practice. Indeed, it is said "This inductive bias enables the encoder and decoder networks to focus on the signal on a low enough frequency which is not dominated by noise." while the network's role is precisely to focus on subspaces which feature the noise in order to return it. While I understand that some pipelines still use a "sample" type prediction, I think that the understanding of the U-Net inductive bias is not understood if it doesn't take into account both types of predictions, especially if the explanation of one type is contrarian to the other type. - **clarity and space use**: I think that some sections of the paper are very unclear, in part because some essential parts were left out in the appendix. For example, Algorithm 1 for staged training should be in the core paper. Similarly the practical construction of a U-Net that guarantees boundary conditions should be in the core paper. For now, we just get an idea of what the decoder subspaces should look like, but not how the decoder should be designed in order to achieve these subspaces. Right now it is not clear to me how these decoders are designed. A last example (although there are some more) is that the formal definition of the U-Net is hard to grasp at first read. There should be a correspondence between the elements introduced and the typical 2D image segmentation U-Net architecture. - **prior work comparison/mention**: I noticed two parts that needed more references to prior work/comparison to it. First, the connection between U-Nets and wavelets has been a topic of many works among which [A, B]. Second, diffusion on complicated geometries including manifolds has been recently covered by works such as [C-F]. For both of these, mentioning as well as highlighting the differences would be super important. *Minor*: - To me preconditioning means finding a matrix close to the Hessian inverse in an optimization procedure in order to better take into account the geometry of the problem when optimizing, not finding a good initialization. Here it seems that preconditioning is more used to describe a good initialization rather than taking into account geometric information. - "It also explains why in practice, encoders are often chosen significantly smaller than the decoder [26]". I am not sure about this fact, so I think it needs much more evidence (see for example in the diffusers library https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unet_2d.py#L198 the decoder only has one more layer than the encoder). Also [26] is about VAEs, not U-Nets. [A]: Ramzi, Z., Michalewicz, K., Starck, JL. et al. Wavelets in the Deep Learning Era. J Math Imaging Vis 65, 240–251 (2023). [B]: Liu, P., Zhang, H., Zhang, K., Lin, L., & Zuo, W. (2018). Multi-level wavelet-CNN for image restoration. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops (pp. 773-782). [C]: De Bortoli, V., Mathieu, E., Hutchinson, M., Thornton, J., Teh, Y. W., & Doucet, A. (2022). Riemannian score-based generative modeling. arXiv preprint arXiv:2202.02763. [D]: Fishman, N., Klarner, L., De Bortoli, V., Mathieu, E., & Hutchinson, M. (2023). Diffusion Models for Constrained Domains. arXiv preprint arXiv:2304.05364. [E]: Huang, C. W., Aghajohari, M., Bose, J., Panangaden, P., & Courville, A. C. (2022). Riemannian diffusion models. Advances in Neural Information Processing Systems, 35, 2750-2761. [F]: Liu, L., Ren, Y., Lin, Z., & Zhao, Z. (2022). Pseudo numerical methods for diffusion models on manifolds. arXiv preprint arXiv:2202.09778. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - What is meant by self-similarity? - In the code, for MNIST experiment, a network called GNet is used rather than a U-Net : what is it and does it fit with the experiments reported in the paper? - When deriving Theorem 2, it seems that each subspace W_i is an image subspace. However, in practice the U-Net acts on feature spaces, i.e. spaces with much more feature channels than the image itself, how would you take that into account? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and detailed review. We are thankful for the appreciation of the formalism of our theoretical framework which contributes to an understudied area, among other strengths. We respectfully would like to point out that we disagree with the concerns on Theorem 2 which seem to be the reviewer’s main reason for discontent. > “First, the result is about the Haar decomposition of the diffusion process, and therefore it applies to all models that have a multiscale structure with average pooling, not specifically U-Nets.” Theorem 2 analyses the effect of average pooling (wavelet projection [4]) on data under a forward diffusion process. It is worth noting that Theorem 2 does not require or mention the concept of a U-Net per se, but is consequential to the model design of diffusion models. We appreciate your insight, which highlights the theorem's broader applicability to various architectures leveraging a multiscale structure with average pooling. We don’t think this is a “problem” of Theorem 2, it rather shows its strength and generality beyond U-Nets. The reviewer’s description of "multiscale structure with average pooling" captures what many might view as a U-Net's essence. This is just a curious example that the lack of a formal definition in the community necessitates our introduction of Definition 1. This definition serves not just as a concise representation, but also facilitates the deep dive into U-Nets' characteristics, from their recursive patterns (2.1) and scaling laws (2.2) to their relationship with ResNets (2.3). > “Second, [...] diffusion pipelines often use an "epsilon-type" prediction framework, in which the role of the U-Net is not to predict the denoised image (or signal in general), but rather the noise. [...] As a consequence I think the theoretical result does not cover the practice.” We respectfully disagree with this main concern on Theorem 2, and would like to find an agreement with the reviewer on this point. First, we would like to note that both the “epsilon-type” (noise as output) and the “sample-type” (image as output) solve *the same task*, namely to separate signal from noise. While in both frameworks the network receives image and noise $Im + \epsilon$ as input, the “epsilon-type” framework achieves this task by subtracting the signal $Im$, while the “sample-type” framework subtracts the noise $\epsilon$. The fundamental quantity of interest which governs how challenging it is to solve this task of separating signal and noise (in both formulations) is the signal-to-noise ratio between $Im$ and $\epsilon$. In Theorem 2, we demonstrate that the high-frequency components of an image have their signal-to-noise ratio dominated by noise exponentially faster than the low-frequency components. Importantly, a U-Net architecture with average pooling is passing as input to the decoder the image plus noise at the highest resolution, and the low frequency details that are dominated by signal on lower resolutions. This means that apriori to training the network, we have fed as input the image plus noise, and a first approximation of the image without noise. As the network learns to separate noise from signal, these two inputs yield a useful inductive bias for the signal and noise separation task in light of Theorem 2. While not entirely certain, we believe the confusion might arise from the following argument. During average pooling, the high-resolution Haar wavelet spaces, which are dominated by noise (Theorem 2), get discarded. However, in the "epsilon-type" formulation, it may be unclear how the U-Net can now predict the noise, if ‘we just removed the noise’ via average pooling (more precisely, those subspaces dominated by noise). Should this indeed be the source of confusion, we can hopefully resolve this easily. The responsibility for ‘predicting the noise’ does not solely lie on the encoder. Instead, it's the decoder that uses the encoder's signal through skip connections. The encoder primarily functions as an ‘information compressor’ for the input data. The decoder utilises this condensed input to generate an output, which could be either the 'sample' or the 'epsilon' (noise). In either case, a lower-frequency, low-noise signal serves as valuable compressed information that aids the decoder in generating its output. > ”I noticed two parts that needed more references to prior work/comparison to it [...] mentioning as well as highlighting the differences would be super important.” U-Nets and Wavelets: We appreciate the reviewer's references, especially [A], and will include [A,B] in our discussion of Haar subspace U-Nets. While these works highlight wavelet-U-Net connections, our Definition 1 unifies diverse wavelet bases and generalises to other bases. Our empirical study of Multi-ResNets, replacing the U-Net's encoder with wavelet projections, adds novel insight. This demonstrates the broader connection between U-Nets and wavelets [5]. Diffusions over Complex Geometries: U-Nets achieve state-of-the-art results in learning diffusion models for image-based tasks [9]. Yet, their performance can waver when modelling functions on non-Euclidean spaces. Notably, recent research into diffusion models on Riemannian manifolds [C-F] has favoured simple, fully-connected networks over U-Nets or modifications thereof. Our unified framework bridges this gap by unveiling previously overlooked potential in neural architecture. Our work is hence complementary to [C-F], focussing on the neural architecture. It paves the way for U-Net adaptations tailored to complex geometries, including CW-complexes and manifolds by integrating data topology into the U-Net itself. This lays the foundation for neural architectures that suit diffusion models on manifolds. - - - Lastly, we refer to our response to all authors for a discussion of other points in this review. --- Rebuttal Comment 1.1: Title: Thanks for your answers Comment: Before answering, I would like to thank the authors for engaging in the review process in what I view as a very positive and intellectually honest manner. I think that aside from the 2nd point about theorem 2, I don’t have major blockers left, granted the improvements mentioned in the rebuttal are implemented (which I don’t think is a big issue). *1st point on Th. 2., about specificity*: But then how does this result relate to e.g. the one by Donoho in “Denoising by soft thresholding” (1995)? He also looked at the variance of the noise in wavelet coefficients. *2nd point on Th. 2., about epsilon prediction*: I think the clarification that the decoder can focus on the high frequency is important, and to me it sounds contradicting with what was written in the paper “This inductive bias enables the encoder and decoder networks to focus on the signal on a low enough frequency which is not dominated by noise.”, but maybe it’s because I am understanding the word “focus” in this context in the wrong way. Still I think this demands clarification. Further, my main take-away from this response is that the inductive bias of the U-net basically (and maybe I am simplifying too much here) is that it has the ability to provide “a lower-frequency, low-noise [version of the] signal”. I don’t understand then how we can understand that this provides a good inductive bias, i.e. is “valuable compressed information that aids the decoder in generating its output”. To me this is the crux of the matter, rather than showing that the U-net’s encoder can extract this information. I realize that my problem is not necessarily on Theorem 2., which I see as valid (not having checked the proof thoroughly but it matches what I have seen in the past in wavelet works), but rather on the interpretation made thereafter. In the abstract it says "In diffusion models, our framework enables us to identify that high- frequency information is dominated by noise exponentially faster, and show how U-Nets with average pooling exploit this.". While I agree that the first part of this sentence is backed up in the text, to me the second part is not as I explain above. --- Reply to Comment 1.1.1: Title: Reply to: 'Thanks for your answers' (1/3) Comment: We very much share the positive sentiment about this useful and enlightening discussion with the reviewer. We are thankful for the questions the reviewer is raising which allow us to improve and produce a clearer, more accurate explanation of our results. We are glad we could resolve the reviewer’s concerns in most points through the explanations and improvements provided, and would like to discuss the remaining questions below. We would be thankful if you could consider the outcome of this discussion towards the final recommendation and score of your review. > “1st point on Th. 2., about specificity: But then how does this result relate to e.g. the one by Donoho in “Denoising by soft thresholding” (1995)? He also looked at the variance of the noise in wavelet coefficients.” We appreciate the reviewer for highlighting this relevant article. We recognise a notable similarity between the message of our Theorem 2 and the analysis of coefficient decay in the pertinent Besov space in [Donoho1]. Under the forward diffusion process analysed in Theorem 2, the signal of our data predominantly concentrates on the low-frequency wavelet coefficients. In light of [Donoho1], this observation can also be articulated as: when examining the appropriate sequence of wavelet coefficients in a sequence space, the high-frequency coefficients, in expectation, tend to zero considerably faster than their low-frequency counterparts. Therefore, restating Theorem 2 in the context of the Besov space would indeed enhance clarity, especially in elucidating how this property can be generalised to other geometries, given that all analysis is anchored in the Besov space. We propose to incorporate this analysis into the appendix of our manuscript, with reference to the given article, complemented by a more generalised statement of Theorem 2 for users working with U-Nets in complicated geometries. Another intriguing idea would be to utilise this sequence space information from a specific dataset to define the noise schedule of a diffusion process and to determine the size of each decoder block based on these coefficients. While delving deeper into this would require additional research, we are confident that presenting this information in a straightforward manner could be an interesting extension towards better U-Net designs *tailored to individual datasets*. We believe that presenting Theorem 2 in this expanded manner, coupled with our general U-Net framework, marks an initial step towards such research. The crux here lies in understanding how this sequence space decays, because as demonstrated in [Donoho1], this can provide insights into our model's design. The reviewer may also be referring to the wavelet shrinkage procedure on page 3 first proposed in (Donoho, 1992). In step (2), wavelet coefficients are translated and then thresholded (the operator $(\cdot)_+$ is in our understanding the ReLU function) which effectively results in a ‘continuous shrinkage to zero’ of wavelet coefficients, where some coefficients are indeed set to zero. This is in contrast to the ‘hard projection’ in average pooling, where coefficients are either their identity or set to zero. It is also worth noting that this procedure makes the assumption that the reconstruction is at least as smooth as the ground-truth function that shall be approximated (see (1.3)), an assumption which Theorem 2 does not make. On a final note: [Donoho1] is a long paper with many results and we admit that it is slightly challenging at present. If we should take into consideration another specific result from this paper that we have not discussed above, we would appreciate it if the reviewer could point us to it. Lastly, we would like to point out that there are two versions of this paper. We carefully read the version cited below [Donoho1]. We thank the reviewer for this very interesting reference to relevant literature, and the useful discussion on it. [Donoho1] https://web.stanford.edu/dept/statistics/cgi-bin/donoho/wp-content/uploads/2018/08/denoiserelease3.pdf
Summary: This paper proposes a framework for designing and analyzing general UNet architectures. Theoretical results are presented that can characterize the role of encoder and decoder in UNet and conjugacy to ResNets is pointed out via preconditioning. Furthermore, this paper proposes Multi-ResNets, UNets with a simplified, wavelet-based encoder without learnable parameters. In addition, this paper presents how to design novel UNet architectures which can encode function constraints, natural bases, or the geometry of the data. Experiments of Multi-ResNets on image segmentation, PDE surrogate modeling, and generative modeling with diffusion model are conducted and demonstrate competitive performance compared to classical UNet. Strengths: - Provide the mathematical definition of UNet, which enable identifying self-similarity structure, high-resolution scaling limits, and conjugacy to ResNets via preconditioning. - Based on theoretical analysis, authors propose Multi-ResNets, a novel class of UNet with no learnable parameters in its encoder. - Multi-ResNets achieves competitive or superior results compared to a classical U-Net in PDE modeling, image segmentation, and generative modeling with diffusion models. Weaknesses: - The presentation of the paper is not smooth and clear, which needs significant improvement. For example, the authors claim to propose a unified framework for UNet design, but I don’t see how this model can help researchers design specific UNet for different tasks. It’s more like the authors improve UNet on three chosen tasks and provide some theoretical analysis. Please correct me if I misunderstood anything. - This paper did a poor evaluation of the PDE part. In the PDE surrogate modeling, UNet is not the state-of-the-art model (authors made a wrong and misleading claim about the performance of UNet in PDE modeling) and performs much worse than the Fourier Neural Operator (FNO) [1][2][3]. So I am not convinced that it is useful to only demonstrate the effectiveness of Multi-ResNets compared to UNet in PDE modeling. In addition, no other representative baselines are compared within the NS and shallow water experiments, such as FNO[1], MPPDE[4], FFNO[5], etc. [1] Li, Zongyi, et al. "Fourier neural operator for parametric partial differential equations." arXiv preprint arXiv:2010.08895 (2020).\ [2] Takamoto, Makoto, et al. "PDEBench: An extensive benchmark for scientific machine learning." Advances in Neural Information Processing Systems 35 (2022): 1596-1611.\ [3] Helwig, Jacob, et al. "Group Equivariant Fourier Neural Operators for Partial Differential Equations." arXiv preprint arXiv:2306.05697 (2023).\ [4] Brandstetter, Johannes, Daniel Worrall, and Max Welling. "Message passing neural PDE solvers." arXiv preprint arXiv:2202.03376 (2022).\ [5] Tran, Alasdair, et al. "Factorized fourier neural operators." arXiv preprint arXiv:2111.13802 (2021). Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please refer to the weakness part. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The limitations of this paper are well discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their review, and for highlighting various components and insights of our unified framework as strengths of our work. Below, we would like to focus on the two weaknesses the reviewer mentions. > “The presentation of the paper is not smooth and clear, which needs significant improvement.” Your feedback concerning the paper's clarity is invaluable, and we are committed to making necessary revisions. If you have further specific points or sections that you found unclear, we would greatly appreciate further guidance to target those areas effectively. > “[...] I don’t see how this model can help researchers design specific UNet for different tasks. It’s more like the authors improve UNet on three chosen tasks [...]. Please correct me if I misunderstood anything.” We value your recognition of our efforts in enhancing U-Nets for several tasks. Regarding your concern, our paper aims to unify the U-Net definition and component roles based on literature. The inclusion of Multi-ResNets is not for introducing a new U-Net architecture per se, but rather as a tool to analyse the encoder's role (Sec 3.1) and validate it empirically (Sec 5.1). This approach empowers researchers to make informed U-Net component choices, in contrast to empirical iteration practised. In Sec 3, we offer three examples of novel design choices within our framework that incorporate task-specific requirements and constraints into U-Nets. Among them are: ***U-Nets for complicated geometries:*** Our framework enables the design of U-Nets over complicated geometries beyond the square, for instance CW-complexes or manifolds. See “B. Diffusions over complicated geometries” in our response to reviewer 7MgH for a complete explanation. ***U-Nets with constraints:*** Suppose that we wish to build a U-Net which is guaranteed to output functions of a certain smoothness class and boundary condition. This could be for making a U-Net for PDE surrogate modelling, where we know a priori that the PDE has a weakly differentiable solution with nullified boundary conditions. Then for $W_i$ in our U-Net definition, we can choose a basis of triangular basis functions (Sec 3.2). We encode this within our neural network by selecting a layer of the $D_i$ to model the coefficients of this basis to a given dimension through what our framework and definition propose for selecting the approximation space which is desirable to the problem at hand. We would appreciate it if you could consider these two examples (alongside the Multi-ResNet) in response to your comment on designing task-specific U-Nets towards your final evaluation of our work. > “This paper did a poor evaluation of the PDE part. In the PDE surrogate modeling, UNet is not the state-of-the-art model (authors made a wrong and misleading claim about the performance of UNet in PDE modeling) and performs much worse than the Fourier Neural Operator (FNO) [1][2][3]. [...] In addition, no other representative baselines are compared within the NS and shallow water experiments, such as FNO[1], MPPDE[4], FFNO[5], etc.” It is important for us to stress that this paper is not about PDE surrogate modelling. This paper is about U-Nets, their design and analysis. In our experiments, we chose PDE surrogate modelling as one of three tasks, because U-Nets are a competitive model candidate for each of them, and often the go-to choice for practitioners. The goal of Sec 5.1 was to verify our theory on the role of the encoder, and indeed provide empirical evidence, including on PDE modelling, that parameters in the encoder may sometimes not be as useful as one might think, if the encoder spaces were chosen suitably—a non-trivial insight enabled by our theory. We believe it would be interesting to benchmark U-Nets and Multi-ResNets against further models such as the ones the reviewer mentions, possibly even on more datasets, yet believe this is beyond the scope of this paper. We respectfully disagree with the statement that we had made a “wrong and misleading claim about the performance of [the U-Net] in PDE modelling”. In our submission, we have neither claimed that U-Nets outperform FNO, nor have we claimed that U-Nets are ‘the best’ model for PDE modelling at present. Analysing this was not a goal of this paper. Our intent was to highlight U-Nets as a competitive architecture for the tasks at hand, not necessarily as the unequivocal best choice. We regret any misunderstanding that the term "state-of-the-art" may have caused. While sources like PDEBench [2] also reference U-Nets in this manner (“[...] with results for popular state-of-the-art ML models (FNO, U-Net, PINN) [...]” [2]), we'll revise the term to "competitive" in all occurences to prevent any misinterpretations. That said, we disagree with the reviewer's assessment of FNO's dominance over U-Nets. We believe references [1-3] do not support their claims of clear FNO superiority for various reasons, and we're happy to discuss these papers at length individually during the author-reviewer discussion period. We would further like to refer to the PDEArena benchmark [7], the perhaps most recent, large-scale PDE benchmark which reports U-Nets as competitive, sometimes clearly superior to FNO. In addition, we have run independent, new experiments beyond our submission which underline this on the scale which we report in our submission: in Table 1 in the author rebuttal PDF, we have conducted a comparison of our reported U-Nets with FNO models of similar size. These results match the results in PDEArena by order of magnitude (see Table 2 in author rebuttal PDF), noting that we use slightly different experimental configs. Both report the result that U-Nets outperform FNO. We hence conclude that describing U-Nets as “competitive” for PDE Modelling is an appropriate claim. --- Rebuttal Comment 1.1: Comment: Most of my concerns have been addressed by the author's response. Regarding the performance comparison of FNO and UNet, PDEArena and PDEBench draw the opposite conclusion, so the current benchmark paper on PDE is not a gold standard yet. So it's better for the authors to run PDE baseline models, such as FNO, by themselves and compare with their modified UNet model (after all one study case is PDE modeling, so it is better to not only have UNet result but also include other baseline models.). I understand that this paper is not solely about PDE modeling, and given the fact that my other concerns are addressed, I will increase my score to 5. --- Reply to Comment 1.1.1: Title: Response to: Official Comment by Reviewer ndAy Comment: We thank the reviewer for their suggestions which have improved the quality of our paper, and for raising their score in light of the discussion. In a potential camera-ready version, we will include the experiment we presented in the author rebuttal PDF, where we conducted an independent FNO baseline comparison to U-Nets. In this experiment, we observed that, for a comparable model size and on two datasets (Navier-Stokes, Shallow water), U-Nets outperform FNO in terms of r-MSE (across multiple random seeds), in some cases by an order of magnitude. We will also include a more thorough literature review on the topic.
Summary: The paper presents a unified framework for U-net design, by generalizing the overall structure of U-nets into different components. The framework is then investigated from a theoretical point of view. At its core, the paper presents Multi-Resnets which is a novel class of U-nets, which is then tested rigorously in the experimental section, in relation to generative modelling, PDE modelling and image segmentation. Strengths: It is always great to see papers that take a step back and tries to unify different work into a framework, to conceptualize what is up and down in a certain field. In that sense this paper success in doing this. In particular, I think sections such as 5.1 are some of the stronger sections of the paper because they conceptualize what the role of the encoder is. Originality: The framework presented in the paper seems to be original compared to other work. Quality: The paper seems to be of a high quality, with both theoretical results to backup the claims of the paper but also empirical evidence to support. This in especially true, when the appendix of the paper is taken into account as it provide much more information, especially on the experimental section of the paper. Clarity: The paper is fairly clear in what it is trying to achieve, however since there are a lot of moving parts to the paper the overall structure sometimes suffer from this and the overall story become a bit "muddy". Significance: While the conceptualization of U-nets indeed is refreshing, I doubt this paper will have a huge impact on domains that use U-nets as most established fields seems to build incrementally upon each others instead of redesigning architectures. That said, only time will tell. Weaknesses: I did not in particular find any weaknesses of the paper. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Maybe out of scope for the paper, but let me ask anyway: The original reason for resnet being introduced was to solve the problem of vanishing gradients. Do you see a connection between this problem and your work? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors provide a lengthy explanation of what the limitations of the framework is in the appendix of the paper, however I think the authors should have definitely included this in the main paper. To me, while the framework generalizes U-Nets it does not necessarily guarantee that the design process becomes easier, as the authors also touch on L162-166. This indeed does seem to be a limitation of the framework, when the initial goal was to "..designing and analysing general U-Net architectures". Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback on our work. We are delighted that you appreciate the approach of "taking a step back" in our research and our analysis of the role of the encoder in U-Nets (Sec 5.1), as well as the originality and quality of our theory, experiments, and Appendix. Your support is greatly valued. We would like to respond to your review below. We appreciate the opportunity to address any points you have raised and further engage in constructive discussions about our research. > “It is always great to see papers that take a step back and tries to unify different work into a framework […] In that sense this paper success in doing this. Significance: [...] I doubt this paper will have a huge impact on domains that use U-nets as most established fields seems to build incrementally upon each others instead of redesigning architectures. That said, only time will tell.” On significance, we concur that many fields evolve through incremental advancements, but the very act of 'taking a step back' has enabled us to unearth several novel insights. For instance, our exploration into the role of the encoder in U-Nets led to the design of Multi-ResNets. Furthermore, our findings on U-Nets' self-similarity structure, high-resolution scaling behaviour, and the capability to integrate function constraints (like PDE boundary conditions) directly into the U-Net architecture are pivotal. We've also ventured into designing U-Nets for complex geometries, such as manifolds and spheres in particular. One of our aspirations with this paper is to encourage the exchange of U-Net design choices across different domains. As an example, the recent PDEArena benchmark [7] demonstrated the high applicability of U-Nets popular in computer vision to PDE surrogate modelling. Notably, in [7], U-Nets are competitive with other architectures (such as Fourier Neural Operators (FNO)) in that field. We aim to foster a common language for this go-to, widely-used architecture design and believe that many more advancements can be made through such cross-domain efforts based on a unified framework. Our hope is that this work will serve as a catalyst for the exchange of ideas and foster advancements in U-Net design, ultimately benefiting a wide range of applications and research areas. > “Maybe out of scope for the paper, but let me ask anyway: The original reason for resnet being introduced was to solve the problem of vanishing gradients. Do you see a connection between this problem and your work?” We thank the reviewer for this intriguing question. We have not extensively analysed the connection between our work and vanishing gradients, which have indeed been addressed in practice through the use of residual and skip connections. However, we firmly believe that there are fascinating properties on the vanishing gradient problem to be explored now that a U-Net is defined (see Def 1). Specifically, we are interested in defining a ‘stable U-Net’ (akin to similar work on stable ResNets [8]) by understanding the minimal requirements on the choice of $E_i$ and $D_i$ so that the model is guaranteed to have gradients bounded away from zero and infinity, independent of width and depth of the network. We acknowledge the importance of this area of research and look forward to further investigations into the stability properties of U-Nets. On another note, our work was indeed greatly motivated by the seminal ResNet paper [6] through the idea of *preconditioning*, which is the core design principle of ResNets and U-Nets alike (also visualised in Figure 2 of our submission). This ultimately led us to the insight of a conjugacy between ResNets and U-Nets (see Prop 1). > “The authors provide a lengthy explanation of what the limitations of the framework is in the appendix of the paper, however I think the authors should have definitely included this in the main paper.” We agree with the reviewer, the limitations discussed in Appendix A are important and useful to be shown in the main text. Following the reviewer’s suggestion, we will include this discussion in Sec 6 (Conclusion) of the main paper as part of the additional page in a potential camera-ready version of this manuscript. > “[...] To me, while the framework generalizes U-Nets it does not necessarily guarantee that the design process becomes easier [...]. This indeed does seem to be a limitation of the framework [...]” Our framework offers a decisive advantage in the design process of U-Nets, allowing successful design choices from one area to be effectively applied in another. One illustrative example is the construction of generative models over triangulated spaces, which has significant implications for modelling functions over complicated geometries. Specifically, when faced with the task of designing a U-Net for a diffusion model with a favourable inductive bias on a triangulated complicated geometry, we have demonstrated in Theorem 2 that utilising a Haar wavelet decomposition of the geometry naturally induces a desired inductive bias for generative modelling. By opting for a Haar decomposition of the triangulated domain, the selection of spaces $V_i$ and $W_i$ is immediately defined, with Theorem 2 further specifying $P_i$. In this sense, our framework makes designing U-Nets ‘*easier*’ by providing a theoretical framework in which design choices for each of the components of a U-Net can be made based on theoretical insight and by incorporating problem constraints, rather than based on an empirically-driven ‘trial-and-error process’. Notably, our framework also extends the applicability of Theorem 2 to a broader range of geometries, enhancing the generality of its theoretical contributions. In light of the reviewer's input, we recognise the need to make these connections more apparent in a potential camera-ready manuscript. We welcome any further suggestions on how to improve the communication of these ideas in our manuscript. --- Rebuttal Comment 1.1: Title: Thanks for your answers Comment: I thank the authors for their lengthy answer, to not only my own review but also my fellow authors. Especially, thanks for being very honest in your answers being aware of your papers weaknesses and still defending the strengths of your paper. The mentioned improvements to the potential camera-ready manuscript sounds like great improvements to the paper. I have no further questions and comments for the paper. I will keep my score at 7 as it is already in higher end based on the other reviews. --- Reply to Comment 1.1.1: Title: Response to: "Thanks for your answers" Comment: We thank the reviewer for their useful questions and remarks which have helped to improve the quality of our submission towards a potential camera-ready version. We are grateful for the efforts in carefully reviewing our work. Lastly, we thank the reviewer for maintaining the high score and viewing to accept our article.
Summary: The work proposes a unified mathematical framework to analyze and design U-Net. Authors highlight the importance of preconditioning and provides several highlights to design unet. Authors propose a new parameter-free encoder based on wavelet space. Strengths: 1. Sec 2 presents a unified mathematical framework for U-Unets. 2. The discussion of precondition parts and how to design various precondition for different tasks is insightful, such as sec 3.2, 3.3. 3. Authors also conduct various experiments to support the their claims and highlight the importance of precondition and network design. Some experiments are quite interesting to me, such as Sec 5.1 and 5.3. Weaknesses: 1. Though it shows improvement in other tasks, there are minimal improvements in diffusion model tasks. Authors should report performance of Multi-ResNet for diffusion models and compare it with existing U-Nets used widely in diffusion models. 2. I would like authors to comments on how to design Unet for diffusion model so that it can recover high frequency details in high resolution image generation task. The average pooling will filter high frequency noise, but also filter the high frequency data information. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See weakness Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: See weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are glad to hear the reviewer appreciated our submission. > “Though it shows improvement in other tasks, there are minimal improvements in diffusion model tasks. Authors should report performance of Multi-ResNet for diffusion models and compare it with existing U-Nets [...]” It is important for us to stress that generative modelling with diffusion models is only one of the three tasks (besides PDE surrogate modelling and image segmentation) which we tackle to verify hypotheses and ideas derived from our unified U-Net framework. Diffusion models are hence not the core focus of this work, it is rather one (currently very popular) area of research where the U-Net architecture produces strong results. Our key findings are irrespective of the choice of neural network one uses to parameterise the NNs $\mathcal{E}$ and $\mathcal{D}$ (with the caveat that Multi-ResNets require a ResNet decoder). Comparing different U-Net architecture choices for diffusion models is hence a very interesting direction which should definitely be pursued, particularly in light of this paper, and we thank the reviewer for this idea. We yet believe it is not essential to support the contributions of this work. Besides, we would like to briefly recap 3 key improvements our unified framework provides *specifically for diffusion models*: A. Theoretical understanding: While not an improvement for diffusion models in their design or performance, we believe Sec 4 is a crucial improvement and contribution for understanding the understudied success of U-Nets in diffusion models, especially on images. To the best of our knowledge, Theorem 2 for the first time proves why a U-Net with average pooling as their projection operators $\mathcal{P}$, and regardless of the choice of operators $\mathcal{E}$ and $\mathcal{D}$ is a natural inductive bias for data under a forward noising process of a diffusion model. Specifically, Theorem 2 shows that – possibly in contrast to the intuition prior to this submission – that not all frequencies of an image are affected equally by the forward noising process. B. Diffusions over complicated geometries: Our framework enables the design of U-Nets over complicated geometries beyond the square, for instance CW-complexes or manifolds. This can for instance be used to design neural network architecture for diffusion models on the manifolds merely by encodes the topology of the data into U-Net itself. As a proof-of-concept with U-Nets over a Haar wavelet basis and on a triangular domain (see Sec 3.3 and Sec 5.3), and discuss how this can be extended via triangulations to design a U-net for a sphere (see Appendix A.2) and other triangulated spaces. This may be particularly successful as at present, such work often leverages simple, fully-connected networks as their neural architecture. For instance in [4] (also mentioned by reviewer go4z), an award-winning diffusion models paper from last year’s NeurIPS, which extends diffusion models to Riemannian manifolds, the authors exclusively use fully-connected neural networks in their experiments (see appendix O in [4], “Architecture”). Our work is hence complementary to such work, in the sense that it enables a large, previously unrecognised potential in the neural architecture. C. Multi-resolution training and sampling: As we experimentally demonstrate in Sec 5.2, the identification of the self-similarity structure of a U-Net in Sec 2 enables us to naturally train and sample from diffusion models on multiple resolutions via Algorithm 1 (see Appendix B.2), by training $\mathcal{U}$ on resolution $i-1$, then training $U_i$ while preconditioning on $U_{i-1}$. Two advantages arise: First, if data is available on a higher resolution $i$, we can reuse the U-Net on a lower resolution $U_{i-1}$ in a principled way, i.e. by preserving the self-similarity structure of the neural network. Second, we can checkpoint sampling models when prototyping, look at and possibly measure the quality of lower-resolution samples (see Fig. 6) first, which may be indicative for the quality of the higher resolution. > “I would like authors to comments on how to design Unet for diffusion model so that it can recover high frequency details [...].” We find the reviewer's question intriguing. While we are uncertain we believe that the reviewer's inquiry pertains to super-resolution applications. Super-resolution techniques hinge on identifying long-range dependencies within image structures to predict higher resolutions. For the purposes of our response, we will assume that our image super-resolution algorithm is trained using high-resolution training data. First, we require the U-Net model to accommodate images of varying resolutions. The essential design requirement is the U-Net's ability to accept images of different resolutions. The original U-Net lacks this capability. However, our Multi-ResNet inherently addresses this issue, enabling the U-Net to handle input images with varying resolutions seamlessly. This is because the encoder has no learnable parameters, and in essence, makes various projections onto varying resolution spaces of the original image. Suppose now that we are given data on resolution $V_{i+1}$ (a higher resolution than $V_i$) and train a diffusion model with a Multi-ResNet on this data. Then we are given data of resolution $V_i$ and asked to produce a super resolution version of this image. What is important here is, as the reviewer says, that in this setup, any image of arbitrary resolution can be effortlessly embedded into one of the resolution input spaces, denoted as $V_i$, through interpolation. This elegantly extends the U-Net model to support resolutions of any scale, making it a versatile and straightforward approach applicable even to infinite resolution scenarios. This model may not perform well, but being the simplest U-Net model that is adaptive to this context, it forms a good baseline method to ablate from.
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable and insightful comments and questions. Motivated by the reviews, we make several changes towards a potential camera-ready version of the submission, using the one additional page, which we list below. ### Summary of key changes towards a camera-ready version * A note on how to extend our framework to arbitrary channel dimensions using the Kronecker product in Sec 2.1. * A paragraph on the limitations of our work in Sec 6 (Conclusion), based on the respective section in Appendix A. * Algorithm 1 and Appendix A.3 as a condensed version, now in Sections 5.2 and 3.2. ### Response on other points from reviewer go4z: > “Algorithm 1 for staged training should be in the core paper [...] a U-Net that guarantees boundary conditions should be in the core paper.” We will address this clarity issue by incorporating practical approaches for guaranteed boundary conditions with a U-Net, adding Algorithm 1 to Sec 5.2 and a condensed version of Appendix A.3 in Sec 3.2 as per the reviewer's insightful feedback. > The formal definition of the U-Net is hard to grasp at first read. There should be a correspondence between the elements introduced and the typical 2D image segmentation U-Net architecture. We agree and will provide a figure further illustrating Definition 1 in a potential camera-ready version. > To me preconditioning means finding a matrix close to the Hessian inverse in an optimisation procedure [...] Here it seems that preconditioning is more used to describe a good initialisation [...] We acknowledge the reviewer's observation that “preconditioning” can refer to finding a matrix close to the Hessian inverse for an optimisation procedure. However, we would like to emphasise that “preconditioning” is a term which implies different meanings across various contexts in mathematics. The seminal ResNet paper uses the word ‘precondition’ to indicate ‘initialisation to the identity: “In real cases, it is unlikely that identity mappings are optimal, but our reformulation may help to precondition the problem [6].” As this U-Net work has strong ties with ResNets (e.g. their conjugacy in Prop 1), we considered using it in the same way seems appropriate. We appreciate the reviewer's feedback, which allows us to enhance the clarity and quality of our work. > “[...] how the decoder should be designed in order to achieve these subspaces.” We value the reviewer's input on improving our manuscript quality. The decoder's design corresponds to predicting linear combinations of subspace $W_i$ basis vectors. Haar wavelets use a pixel grid as a valid basis, Fourier layers for instance transform into linear combinations of Fourier modes of certain frequencies. Generally, $w_i \in W_i$ is expressed as $w_i = \sum c_i e_i$ with basis vectors $e_i$. The neural network predicts coefficients $c_i$ for a fixed data basis. We'll explicitly address this in the paper update and appreciate the reviewer's observation which helps clarify this point. > “It also explains why in practice, encoders are often chosen significantly smaller than the decoder [26]’. I am not sure about this fact [...]” [26] indeed focuses on hierarchical VAEs, which however employ U-Net-style architectures, extensively discussed in [5] (Appendix B.2). Notably, [26] presents an interesting scenario with a smaller encoder than the decoder, possibly explained by our framework. We'll revise 'often' to 'in some cases' to better reflect varying encoder-decoder choices in diffusion models. Thank you for this remark. > “What is meant by self-similarity?” We appreciate the reviewer's input on terminology clarity. "Self-similarity" in our context refers to refining a square image into smaller squares to establish nested approximation spaces ${W_i}$ for our U-Net. This enables the U-Nets' adaptability to recursively refined geometries. While acknowledging the non-standard nature of this term, we'll enhance precision in Sec 2.2 to address this feedback. Thank you for helping refine this aspect of our work. > “What is it [GNet] and does it fit with the experiments reported in the paper?” We appreciate the reviewer's notice of a code typo. "GNet" was our code name for the ‘general’ U-Net class. This term was specific to initial diffusion experiments, and we will correct it in the potential camera-ready submission. > “U-Net acts on feature spaces, i.e. spaces with much more feature channels than the image itself, how would you take that into account?” Our framework easily extends to multiple channels by utilising the kronecker product to expand W_i to the desired channel count in a U-Net. In a potential camera-ready version, we'll add a note in Sec 2.1 and a dedicated appendix section for the construction details. Thank you for this clarifying comment. ### References: [1-3]: see Reviewer ndAy [4] De Bortoli, V., et al. 2022. Riemannian score-based generative modelling. Advances in Neural Information Processing Systems, 35, pp.2406-2422. [5] Falck, F. et al., 2022. A Multi-Resolution Framework for U-Nets with Applications to Hierarchical VAEs. Advances in Neural Information Processing Systems, 35, pp.15529-15544. [6] He, K., et al., 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778). [7] Gupta, J.K. and Brandstetter, J., 2022. Towards multi-spatiotemporal-scale generalized pde modeling. arXiv preprint arXiv:2209.15616. [8] Hayou, S., et al., 2021, March. Stable resnet. In International Conference on Artificial Intelligence and Statistics (pp. 1324-1332). PMLR. [9 Hoogeboom, E., et al., 2023. simple diffusion: End-to-end diffusion for high resolution images. arXiv preprint arXiv:2301.11093. Pdf: /pdf/685950451a95d4f1054a1b14fc2fc3e8ac2d379f.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
A database-based rather than a language model-based natural language processing method
Reject
Summary: This paper proposes a new method for natural language processing. Instead of using language corpus, authors suggest to use database. Sentence generation is a linear schematization of a database-based representation. It is indeed an interesting idea. Strengths: Authors propose a brand new NLP approach that is more closer to the way the human brain processes information, and very likely on the way to a brand new neural process for NLP. Weaknesses: There is no experiment described in the paper. Authors shall finish experiments, then submit papers. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Could we understand database as a base of data, which includes natural language descriptions? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: Please read papers about mental spatial models, to see how simple spatial descriptions are generated from mental model. For example, why most people say, San Diego is located further west of Reno? What could be the structure of the 'spatial database' in our neural mind? Flag For Ethics Review: ['No ethics review needed.'] Rating: 1: Very Strong Reject: For instance, a paper with trivial results or unaddressed ethical considerations Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q11: Database (R4)** The “database(s)” is used to store information (knowledge). It is equals to the memory of the human being. The database factually describes the information (knowledge) in the real world and how they are stored and organized in the human brain. Please refer to the reply of Q2. **Q12: “Why most people say, San Diego is located further west of Reno? ” (R4)** The general explanation is that the memory in the human brain does not always reflect the real world 100%; it may contain some wrong information or confusion, such as, you may have misremembered the discussion deadline, but the memory can be corrected. Similarly, the model (or database) in the proposed method can also be modified. For the specific spatial relation model of "San Diego" and "Reno", sorry, I don’t know these two entities, the possible explanation is that: 1)“San Diego” or “Reno” has been listed in an improper layer (refer to the layer in Figure 5), this case and its solution have been discussed in lines 69\~72; 2) there is more than one directed edge between “San Diego” and “Reno” (refer to the L5 in Figure 7), the error occurs during the derivation, i.e., the derivation demonstrated in lines 139~ 151. No worries; everything can be rewritten. **Q13: “Read papers about the mental spatial models” (R4)** During my research, I did extrapolate the new paradigm to other related disciplines, such as classification problems, psychology, philosophy, etc. As far as the results are concerned, the new paradigm not only brings a completely new interpretation to the above disciplines or problems, but also provides some ingenious solutions.
Summary: This paper aims to take a novel standpoint with respect to all the neural network architectures working on natural language (NN-NL). According to the authors, these NN-NLs work on the surface of the language, disregarding that the language is encoding information. In their opinion, information should be represented as entities and relations as in databases. Consequently, the paper proposes NN architectures working on trees and working on different levels of information encoding (Fugure 5). Clearly, relations have properties such as transitivity that should be used during training and inference. Strengths: - The paper seems to propose a revolutionary way of thinking to neural networks Weaknesses: - The idea of information is a little weird. First, in this paper, the term information stands for structured information, and, apparently, the overall idea is to use structural information as the native form of encoding information as in databases. This is a little strange as there are not very large corpora of structured information. Indeed, having a large corpus is the key of success for these neural networks. - The paper does not mention that the overall field of NLP was devoted to take Natural Language to a structured representation. From there, tasks were solved, eventually going back to natural language for specific tasks like dialog, question answering, and so on. This structured representation is at different levels: morphology, syntax, semantics, and, sometimes, pragmatics (e.g., in the form of speech acts). Only in the last decades NLP has pushed tasks from NL to NL (e.g., natural language inference, dialog) with architectures based on Machine Learning, which may be agnostic with respect to the structured representation of language utterances. - Structured information mentioned in the paper may be correlated with the semantic representation of natural language. This is not even mentioned. - There is a large body of studies on probing transformers to see if they replicate an NLP pipeline and to understand how they encode syntax, semantics and so on - The basic idea of NN is to encode text, which may be retrieved and used by means of other similar text. This is encoding "information." - The paper does not propose a dataset to work with. - The paper has no experimental section Technical Quality: 1 poor Clarity: 3 good Questions for Authors: See weaknesses Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 1 poor Presentation: 3 good Contribution: 1 poor Limitations: Not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 1: Very Strong Reject: For instance, a paper with trivial results or unaddressed ethical considerations Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q6: Large corpus and Neural network (NN) (R3)** Large corpus are the sample sets for training NN models; the NN models are a statistical inference model. See the reply in Q1; the statistical inference models are unsuitable for NLG and NLU problems. _The research object of my work is the information encoded in language, and the organizational and compositional structure of the information described in languages (lines 5\~7; lines 26~28),_ which differs significantly from the study object of the NN models-based methods. They are two different research methods with two other research objects. Therefore, corpus and NN are not mentioned in my work. **Q7: "Taking Natural Language to a structured representation." (R3)** I beg to differ, and my work is devoted to studying and revealing the structure of how information is organized and stored in the human brain. _Due to a small proportion of information in the human brain being encoded as natural languages for external output(lines 227~232),_ thus, we can take natural language as the $\underline{\text{medium of study}}$, but not the ultima $\underline{\text{research object}}$. **Q8: "The basic idea of Neural Network (NN) is to encode text." (R3)** I agree that the basic idea of NN is to encode text. _However, I believe that the ultimate goal of NLP is enabling machines to understand and use natural language as humans do (line 15)_. And I will propose a brand-new neural network in another paper or my book. _The new neural network_ is not based on the _chain rule_, and its working mechanism is more similar to the neurons in the human brain. **Q9: "The structured representation is at different levels: morphology, syntax, semantics, and, sometimes, pragmatics" (R3)** All the concepts or methodologies mentioned above are derived and developed to study NLP problems. According to the first principles, after rediscovering and reconceptualizing natural language, _i.e., natural language is essentially a way of encoding information (lines 2\~3, line 24), and sentences encode not only the specific information to be conveyed, but also the processing requests for that information (lines 161\~165, lines 190\~192); The sentence understanding task consists of two parts: a) understanding of the processing requests of the specific information implicit in a sentence, and b)understanding of the specific information conveyed in the sentence (lines 186\~188)._ Naturally, the research methodologies will be adjusted and modified accordingly. **Q10: Lack of dataset to work with (R3)** See the reply in Q1. ------------------------------ Special thanks to you for helping me review the previous work on NLP, so that I can expand my knowledge and think more deeply in the comparison process. As the saying goes: "If you don't pull out the light, you can't understand." --- Rebuttal Comment 1.1: Comment: Hi, it was nice to discuss this with you. And I know it is hard to think in a completely new paradigm, and I have almost completed all the basic model parts, which gave me enough confidence in my work. Thus, whether the article is accepted or not, the work that follows will not be affected. So is there anything you'd like to discuss that interests you?
Summary: This paper studies a database-based natural language processing method and proposes a tree-graph hybrid model based on three types of spatial relations. The model is further applied to both natural language generation and natural language understanding tasks. Strengths: The insight of borrowing human cognition to develop a natural language model processing method is worth exploring. Weaknesses: This paper lacks experimental observations and analyses to validate the effectiveness of the proposed method or support the conclusion. Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: 1. Capitalization typo in the title, line 9, and line 49. 2. Adequate literature review is needed. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 2 fair Contribution: 2 fair Limitations: Experimental results and analyses are needed to validate the effectiveness of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q4: Typos (R2)** Thanks for your kind reminder; I will thoroughly check and fix them in the revised version. **Q5: Literature review (R2)** I tried to find some literature at the beginning of my work, but unfortunately, it was unavailable. After that, I throw myself into my work.
Summary: This paper advocates the separation of language and knowledge. It proposes a database-NLP method. The knowledge is contained in one or more databases whose internal structures can be a tree, a graph, or a hybrid. There are associated methods to query and retrieve from the database(s) the information. Finally, some "chain" based NLP methods connect the various tasks with the databases. In this paper, the database holds spatial relationship among various locations (e.g. Duck University is a location, Tennessee is a location, "the fridge" and "Tom's room" are locations). Overall, this reviewer feels that the approach proposed in this paper is very close to known art. Strengths: This approach has a very focused usage scenario. It can be used in a situation where "hallucination by modern language model" is not permitted. Weaknesses: Overall, this reviewer feels that the approach proposed in this paper is very close to known art. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: This reviewer does not have any questions to the authors. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 1 poor Limitations: The authors clearly presented the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
null
Rebuttal 1: Rebuttal: Thank you for reading and commenting on my work. I look forward to more opportunities to communicate in the future. **Q1: Lack of Experimental Section (All)** I apologize for the operating error in the checklist on the initial submission. For the experiments option, I selected “yes”; actually, it should be a “n/a”. The experimental method is not a suitable option to verify the effectiveness of the proposed method for the following reasons. 1. The proposed model is a data structure, not a statistical inference model. What describe in the proposed model are facts. Therefore, there is no need to verify its validity experimentally. 2. Experimentation is not the only optional research method. Experimental methods are primarily used in the _natural sciences_, such as physics and biology. Logical testing methods are more popular in _formal sciences_ such as mathematics, logic, basic computing science, and other disciplines where artificial concepts are studied. The logical testing of the proposed method is listed in a separate item. 3. In the areas of NLU and NLG, experimental results derived from _statistical inference models_ (e.g., language representation models) are not reliable; see proofs below: > `Proof1:` The key assumption must be met to ensure language representation models work: $\underline{\text{understanding depends on context}}$. However, the assumption is untenable and easily disproved.\ &emsp; *Experiment:* To understand the word "egg" without context.\ &emsp; *Process:* We can understand the word "egg" by searching for the relative information (knowledge) of the entity "egg" in our memory (databases). For example: "color", "shape" and "taste" of eggs, "the texture of the eggshell", "eggs as food", "the relation between eggs and chickens", "some relevant scenes", etc. All the recalled information(knowledge) in our memory makes up our understanding of the word "egg". As shown in Figure A, if we know nothing about the entity" egg", the word "egg" is not understandable; by contrast, the drawing" egg" provides more clues to understanding.\ &emsp; *Result:* It is easy to see that language understanding does not depend on context but relies on the relevant information (knowledge) in memory (database). Furthermore, the level of understanding of the word "egg" depends on the amount of relevant information (knowledge) in the memory (database). It won't exceed the scope of the memory (database). Note that only a fraction of the information (knowledge) involved in the understanding process (the data processing process) is encoded in language, forming the collectable samples. > `Proof2:` Unbiased sample sets (corpus) are not available in the real world, which leads to the unreliability of statistical inference models trained on biased sample sets.\ &emsp; Language is a tool used to exchange information (knowledge) between people, who tend to encode and transmit information (knowledge) that the other parties do not know in language. Information (knowledge) shared by both parties is rarely encoded in language. Moreover, due to the limitation of language as an encoding tool, some of the information (knowledge) involved in the process of understanding cannot be encoded in language. Therefore, the above selective tendencies and the tool's limitation have resulted in an overall bias of language as samples (see Figure B). **Q2: The reliability verification of the proposed method (All)** Let's look at the following concepts from a different perspective. Then the logical relationships between them will be self-evident. `Axiom:` The model proposed in Figure 5 is actually an axiom that factually describes the spatial relations between entities in the real world. An axiom does not need to be proved. `Proposition:` Sentences read from the model (or database) are the propositions. The proposed NLG method is to derive propositions from the axiom. For example, all the sentences in Table 7 are propositions derived from the model in Figure 5. The proposed NLU method is to verify propositions with the axiom. For example, all the sentences in Table 7 are verified as true. The sentence "The cat is in the fridge" is verified as false. In the NLU process, if a given proposition is known to be true, the new information (knowledge) brought by the proposition can be written into the model (or database), further expanding the axiom. This is a self-learning process. **Q3: Naming issue of the research object (R3)** Thanks for raising the concern. Yes, it is crucial to clearly and accurately define and describe the research object in our work. In this paper, I use "information" to name the research object, which is only a compromised option. The other candidates are "knowledge" and "data" (see Figure C), but the things encoded or represented by the words "knowledge" and "data" cannot cover all that is involved in the NLG and NLU processes. After careful consideration, "information" is chosen. I know that "information" is a less clear-cut concept, and this issue may need to be discussed within the field. I will replace the "information" with "information (knowledge)" in the updated version temporarily. Pdf: /pdf/494415ac0d4a2536837324d15f56d7d3f2998c5c.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
The Pick-to-Learn Algorithm: Empowering Compression for Tight Generalization Bounds and Improved Post-training Performance
Accept (spotlight)
Summary: This paper introduces a meta-learning algorithm which compresses a training set $D$ of size $|D|$ into a smaller training set $T\subset D$ of size $|T|$. The algorithm can be defined for any *base learner*, i.e. any training strategy which outputs a trained hypothesis $h$ when given a training set. The set $T$ is selected in such a way that the performance of the base algorithm trained on $T$ and evaluated on $D$ is better than a predetermined threshold. Relying on some powerful recent results from [1], the authors show that one can obtain tight generalization bounds for the result of the algorithm expressed as a function of the size ratio $|T|/|D|$: intuitively, if the model is able to compress the dataset $D$ into a much smaller dataset $|T|$ whilst maintaining the performance of the base learner trained on $T$, it must be that the sample size is already sufficient to train the model well. Thus, the results can be interpreted as a middle way between on the one hand the traditional approach to generalization bounds via direct study of the function class searched by a given algorithm, and on the other hand, the even simpler *test-set bounds*[2]. The function of the ratio $|T|/|D|$ which gives a bound on the probability of error is a complex implicitly defined function inherited from [1] with a very mild dependence on the failure probability $\delta$. The algorithm functions as follows: start with an initial hypothesis $h_0$, then find the element of the training set $D$ which is the least well explained by $h$ (for instance, the one with the largest loss function value), and include this element in the set $T$. Retrain the hypothesis based on $T$ and repeat the procedure until either every element of the set $D\setminus T$ has a loss function value below a given predetermined threshold. The proof of the generalization bounds is a reasonably direct consequence of Theorem A.4., which is a powerful result from [1] concerning compression schemes. The key is to show that the compression function defined by the algorithm (the function which compresses $D$ into $T$ satisfies the properties of *preference* and *non associativity*, which are proved in Lemmas A.5. and A.6 respectively, and that the failure event which is bounded in Theorem A.4, which is the *probability of change of compression* is implied by the failure of prediction event (this is shown in Lemma A.7.). A technical condition of non concentrated mass which is required to be satisfied for Theorem A.4 to hold, is also removed by a straightforward argument augmenting each datapoint by a continuous random variable which doesn't affect the loss function/order. It is also shown (Proposition A.9) that any compression scheme which outputs a single element and is preferent must correspond to a maximum function with respect to some well defined notion of order. Experiments on the MNIST dataset and on a synthetic regression dataset demonstrate that the proposed algorithm yields superior generalization performance and tighter bounds than the Pac-Bayesian baseline, and proide slightly inferior bounds compared to a test-set approach. It is still worth noting nonetheless that the test performance is still superior to the test set approach due to a better indirect exploitation of the whole training set in the compression step. ==============Post-rebuttal========== As seen in the comments below, my discussion with the authors and the extra material promised (especially the experiments in the rebuttal concerning support vectors) have increased my opinion of the paper, resulting in me raising my score to 6. I believe this paper is above the threshold and tackles very interesting questions, with the main downside being a relatively small amount of material and original proofs (which should not necessarily disqualify a paper for publication). ===== **References** [1] Marco C. Campi, Simone Garatti. Compression, Generalization and Learning. ArXiv 2023. [2] Langford, J. and Shapire, R. Tutorial on practical prediction theory for classification. JMLR 2017. Strengths: 1. This is a very interesting direction, not just in terms of providing generalization bounds (which is the main aspect presented in the paper) but also in terms of potential practical interpretation: this is a simple algorithm that, in principle, allows one to compress a dataset into a much smaller one, allowing us to derive interpretable information about which datapoints are key to the training procedure. 2. This is a novel application of a mathematical result from [1] into a machine learning context, and has a distinctly novel flavor compared to many existing generalization bounds. 3. The paper is relatively well written and a pleasure to read. The proofs are clean despite the fact that they must have been a bit annoying to write: It is quite trivial to convince oneself that the results (lemmas A.5, A.6, A.7 and Proposition A.9.) hold, but not so much fun to write down explicitly. Weaknesses: 1. The results are essentially straightforward applications of existing results from [1], there isn't really any non trivial difficulty that had to be vanquished in the proofs. In short, this paper isn't a lot of work from the authors. 2. The experimental evaluation is still relatively preliminary, there is so much more to explore: Other datasets and architectures, better baselines, and more importantly, the implications in terms of interpretability. For instance, the following follow-up experiments could be performed: 2.1 Compare to more baselines on more datasets (Cifar, other Pac Bayesian bounds). In footnote 7 on page 7, the authors claim that "Pac-Bayesian approaches have been developed only for linear regression problems". This is a highly doubtful or (at best misleading) statement. In addition, generalization bounds based on function class capacity could also be evaluated in the synthetic regression dataset. 2.2 (most important) Investigate the interpretability of the results: for instance, it would be interesting and rather key to evaluate the method on a synthetic binary classification dataset with a simple kernel method. It seems that in an ideal situation, the set $T$ should eventually correspond to, or at least strongly overlap, the set of support vectors. Even in the case of MNIST, it would be very interesting to visualize the chosen datapoints and see if they have something qualitatively different from the non-chosen ones. 3 A very big issue is that the algorithm, as presented, is not that applicable in practice. Indeed, to make the algorithm work in the examples considered, the authors needed to pretrain the model on a significant proportion of the training set. In particular, the algorithm cannot select which datapoints in the set used for pretraining are more important than others. Since the algorithms used rely on gradient descent and the authors interpret the call to the base learner as a single gradient step, the initialization is key. It remains to be seen whether the algorithm in its pure form, with a base learner that performs exact empirical risk minimization on $T$, can work in any practical scenario. At the very minimum, it should be checked whether training the network on $T$ from random initialization (rather than pretraining it to obtain $h_0$ and then continuing training with $T$, which is what is done here) yields comparable performance. 3.2 (related to 3) It seems like the extension to the case where several datapoints are introduced into $T$ simultaneously (perhaps in a hierarchical way) would not be too much to ask in a first submission, since it is key to solving the problem of the overreliance on initialization. 4. I feel like the description of the previous results could be more extensive, for the benefit of the reader. For instance, the following things could improve the paper's reach and interest to a broader audience: 4.1 Explain the gist of the proof of theorem A.4., at least in terms of intuitive reason why the function $\Psi$ from line 187 appears. 4.2. Write down the result (theorem 3.3) from [2] which is used to evaluate the test set bounds and discuss it. 4.3 In line 472, the sentence "See also Section 4 in [1]" appears to suggest that the connection between the probability of change of compression and the classification/regression error is already established in [1]. If that is the case, how much of the present paper is truly not covered or implied by the discussion in [1]? 5. I feel like a more detailed description of existing literature on training set compression schemes and how they relate to the present method is needed. It is hard to believe that no such literature exists. **Minor comments/typos** The concept of "probability of change of compression" is frequently used in the main paper (cf., e.g., line 341, but it is only defined in the appendix (455-457) Line 28: "the precision of available bounds is much problem dependent" ===> "the precision of available bounds is highlyproblem dependent" line 64: "is laying the groundwork" ==> "lays the groundwork" line 110 "is add to" ===> "is added to" Lines 114 and 115: "enough appropriate" ==> "appropriate enough" Line 175: "an hypothesis" ==> "a hypothesis" Line 280: "fits well the data" ==> "fits the data well" lines 480 and 560: I would use "remove the condition" instead of "release the condition" **References** [1] Marco C. Campi, Simone Garatti. Compression, Generalization and Learning. ArXiv 2023. [2] Langford, J. and Shapire, R. Tutorial on practical prediction theory for classification. JMLR 2017. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Do you think your algorithm will be able to recover support vectors in a simple linear classification problem? In Table 1 on page 7, which proportion of the training set is used for pretraining? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Mostly the reliance on intialization, the limited evaluation of interpretability and the coverage only of the case where a single element is added to the set $T$ at each iteration. See "weaknesses" for more details". Assuming there is really no comparable result in the literature (i.e. no generalization bounds expressed in terms of the success of a compression method), this is still a **very interesting paper opening up a new direction**. However, the amount of content included in this first contribution is surgically small. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for the positive review and constructive comments. In addressing them, we have produced additional material (see attached pdf and below) that we believe will improve the quality of the manuscript. **Weaknesses** * 2.1. We concur that our sentence on PAC-Bayes methods in regression problems can be misleading and aim to remove it. We will also evaluate generalization bounds based on class capacity as suggested. * 2.2. Many thanks for this valuable comment that touches upon an important aspect. We are already able to offer some insights. Specifically, we ran: i) SVM and P2L+SVM as inner algorithm on a (new) synthetic 3d linearly separable classification problem; ii) SVR using RBF kernels and P2L+SVR as inner algorithm on the regression problem presented in Section 6. Interestingly, in both cases, the behaviour is that expected by the Reviewer: the compressed dataset T eventually contains all the support vectors with the addition of few other examples that are added to T in the first iterations, while the algorithm is still learning what are the significant examples. Please see Figures 1 and 2 in the attached PDF. We will add this material to the paper along with some considerations on the chosen examples in the MNIST classification problem. * 3.&nbsp;We would like to clarify two important points. * First, the inner algorithm, referred by the reviewer as “base learner” (denoted by $L$ in the paper), does not perform a single step of gradient descent, but instead iterates gradient descent until convergence, i.e., “performs empirical risk minimization on T”. As such, the algorithm is given enough freedom to move towards a more informative hypothesis, as guided by the points in T. This is why in the numerical examples the post-training performance of P2L is independent of the portion of data used for pre-training (Fig 2 in manuscript, red line). We apologies if this was not clear -- we will do so in the final version. * Second, we observe that P2L works also without the use of pretraining, see for example the application to synthetic regression in Figure 4 (top panels, red curve at abscissa equal to zero). The motivation for introducing the pre-training is solely that of improving the resulting bound to the risk. Indeed, when starting from an educated guess (i.e., using some data to pretrain $h_0$), the choice of “worst misclassified” point is more meaningful allowing the algorithm to terminate with a smaller set T and thus providing better generalization bounds. Still, good results can be obtained without pre-training and we plan to include evidence for this also for the MNIST example.\ In this context, it is also worth remarking that the choice of pre-training factor can be thought of as a hyper-parameter, for which our generalization bound provide a methodology to make an optimal choice. Such optimal choice is application specific (e.g., 50% for MNIST, while only 10% for the regression problem). * 3.2. We thank the Reviewer for their interesting comment. As for the possibility of introducing datapoints in T simultaneously, we note that this can be readily achieved and that our methodology directly accommodates this extension.\ While we plan to include a discussion on this point in the final version of the manuscript, we observe that adding more than one point to T might or might not have a beneficial effect on P2L. Indeed, on one hand, adding more datapoints might allow to build a better estimate of $h_0$ earlier on. However, by introducing these datapoints in groups (and thus not allowing for the fine-grained choice of adding them one-by-one) might also result in a larger set T, which could worsen the final bound.\ To illustrate this, we have run the same synthetic regression problem considered in Section 6 (with no pretraining) comparing the case in which, at each iteration, we select only the one/two/three/four/five worst datapoints. The results are presented in Figure 3 in the attached PDF and showcase how, for the problem considered, adding a single data-point at a time is optimal. * 4.1 and 4.2. We will do so. * 4.3. In [1] it is only observed that whenever inappropriateness implies a change of compression, then the probability of inappropriateness (the risk) is no bigger than the probability of change of compression. To use this result, however, one has then to show that inappropriateness indeed implies a change of compression, a property which is not straightforward since it is not satisfied by many learning algorithms. The fact that this property always applies for the P2L algorithm is proved here for the first time (Lemma A.7). In general, apart from Section A.1, which is a summary of results used in this paper, all the material is new. * 5.&nbsp;We will improve the description of the existing literature. In particular, we will mention the literature referred to as “data compression” that aims at reducing a given dataset for computational purposes [Toneva et al]. However, these works differ from our since they are not amenable of the generalization bound. We will also contrast our work with existing compression schemes, which include [1]. These works provide generalization bounds but are typically applied to learning algorithms that have built-in compression properties, e.g., SVM. Our contribution departs significantly from them by offering a methodology to enforce a (preferent) compression also when starting from algorithms that do not have this property, e.g., neural nets. To the best of our knowledge this is the first contribution in this space. * Minor comments/typos: Many thanks for your accurate reading of our paper. All the suggested modifications will be implemented. **Questions** * “Do you think …”: Please, see the response to your point 2.2 * “In Table 1 …”: The Table reports the results for pre-training proportions leading to the lowest risk bounds for each method, i.e., 50% for P2L, 70% for SGD+test-set, 60% for PAC-Bayes. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal; proof for introducing many datapoints at a time? Comment: Thanks for the very detailed rebuttal. I especially liked the graphs in the PDF showing the interpretability of the chosen points, and I think including this in the final version will indeed substantially improve the quality of the manuscript. Many thanks also for the clarification regarding the number of gradient steps and the promise to incorporate this in the final version. Regarding the addition of several points at the same time (my comment 3.2), I acknowledge your answer and I agree that doing so may actually worsen performance. It is a good idea to incorporate your comments in the final version. However, you also claim that the same techniques can be applied to prove an analogous result when you incorporate several points at the same time. How easy is it? Can you add the complete proof in a pdf and then in the appendix of the final paper? Regarding 4.1, you promised to explain the gist of the proof in the final version. Could you do so here as well? --- Reply to Comment 1.1.1: Title: Proof + gist of Theorem A.4 Comment: * Regarding the addition of several points: Many thanks for asking. Interestingly, a simple modification of the original algorithm P2L is sufficient to obtain the desired result, without having to modify any of the proofs. In the following, we describe the modified algorithm which allows one to incorporate $R$ points at every iteration:\ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1. Initialize $T =\emptyset$, $h = h_0$, $z_1,\dots,z_R = \max^R_{h} (D_s)$\ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2. *while* $z_i \neq$ Stop for all i, do\ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $T\gets T\cup${$z_1,\dots,z_R$}\ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 4. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $h\gets L([T]_A)$\ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 5. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $z_1,\dots,z_R \gets \max^R_h (D_s)$\ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 6. *end while*\ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 7. $T\gets T\cup${$z_i : z_i\ge$ Stop}\ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 8. Return $h,T$\ where $\max^R_h(U)$ returns the $R$ maximal points (the maximum, the 2nd maximum, \dots, the $R$-th maximum) of $U$ (if $U$ has less than $R$ elements, then the whole $U$ is returned). The key observation is that this algorithm is completely equivalent to a sequential algorithm (where only one point is added at each iteration) that is identical to Algorithm 1 in the paper, except in that $h$ is updated every R iterations:\ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1. Initialize $T =\emptyset$, $h=h_0$, $z= \max_h (D_s)$, $iter=0$\ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2. *while* $z \neq$ Stop, do \ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $iter \gets iter+1$\ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 4. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $T\gets T\cup ${$z$}\ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 5. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; *If* $\mod(iter,R)=0$, *then* $h\gets L([T]_A) $\ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 6. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $z \gets \max_h (D_s)$\ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 7. *end while*\ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 8. Return $h,T$\ Illustrating this equivalence is enough to secure the result, because for this sequential algorithm the proof of Theorem 4.2 applies word by word, without any modification. * Regarding the gist of the proof of Theorem A.4: To derive the result, in the proof of Theorem A.4 it is provided a characterization of all possible compression schemes in terms of certain probability measures that are needed to evaluate $\mathbb{P}$ {$\underline\varepsilon(\tt{c}(D)|,\delta) \le \phi(D)\le \overline \varepsilon(|\tt{c}(D)|,\delta)$}; then, the maximal value of $\mathbb{P}${$\underline\varepsilon(|\tt{c}(D)|,\delta) \le \phi(D)\le \overline \varepsilon(|\tt{c}(D)|,\delta) $} over the compression schemes characterized as described above is computed, which leads to a non-conservative upper bound to $\mathbb{P}${$ \underline\varepsilon(|\tt{c}(D)|,\delta) \le \phi(D)\le \overline \varepsilon(|\tt{c}(D)|,\delta)$}. The ensuing maximization problem is infinite dimensional and its solution is obtained by duality. Interestingly, no conservatism is introduced at this stage because strong duality holds. The final expressions for $\underline\varepsilon(|\tt{c}(D)|,\delta)$ and $\overline \varepsilon(|\tt{c}(D)|,\delta)$ given in the statement of Theorem A.4 are obtained by studying the dual problem. See [Campi & Garatti, 2023] for further details.
Summary: This work elaborates on the recent breakthroughs of Campi&Garatti 2023, by exploiting compression theory results to design a novel meta-algorithm, namely the Pick-To-Learn (P2L) algorithm. This algorithm aims to compress the dataset to a smaller, truly impacting one, this notion of impact being defined through a hypothesis dependent order $\leq_h$. Authors provably show high-probability generalisation bounds for P2L and experimentally shows that P2L with gradient descent as subroutine yields better theoretical results and experimental performances than both the test-set approach and PAC-Bayes learning. In conclusion, I am convinced by the P2L algorithm, and it could be enough for acceptance at this point. However, I remain doubtful about the experimental process, see the Questions part. **References** Wu et al. 2022 : Split-kl and PAC-Bayes-split-kl Inequalities for Ternary Random Variables Strengths: I found the theoretical part of this work utterly interesting as it offers a breath of fresh air in the generalisation field (at least up to my knowledge). Weaknesses: I have several concerns about the experimental setup (in particular the comparison with PAC-Bayes theory). See the 'Questions' part. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - To perform their experiments, authors split MNIST in 60 datasets of size 1000 to enlighten how well works P2L when few data are available, which corresponds to many real-life situations. However, there is other situation where a huge amount of data is available. Thus, I am wondering about the performance of all three methods when trained on all, 60000 data simultaneously. Did the authors try to perform this experiment? - Appendix B: is the bound 4.a of Clerico et al. truly the tightest? It seems that Wu et al. 2022 improved on this. - I understand that the current experiments aim to express the interest of the proposed meta algorithm, but I remain doubtful about the comparison with the PAC-Bayes learning objective. Indeed, the comparison with the test method is meaningful as it shows how P2L improves on classical GD. Why not taking the PAC-Bayes minimisation routine as a subroutine of P2L and compare it to PAC-Bayes without P2L (plotted in green in this work)? - Does it exist similar procedures than P2L in the literature which selects meaningful data? - What is the time complexity for plotting the bounds and running P2L comparing to other methods? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First of all, we would like to thank the Reviewer for their valuable time and feedback on the work, which we found useful and to the point. **Questions** * “To perform their experiments…”: Considering the small-data regime was motivated both by applications and by the fact that, in these settings holding out a portion of the training set for testing deteriorates the post-training performance. At the same time, we completely agree with the Reviewer that there are many learning problems for which large amount of labelled data is available (including MNIST). As suggested, we have performed similar experiments to the case where all three methods are applied the 60000 datapoint simultaneously (employing a pre-training fraction of 0.5), with similar results to those found when using a dataset of size 1000. In particular, the bound / post-training performance achieved are included in the table below.\ Regarding the bounds: In the present setting, the generalization bound attained by test-set is slightly better compared to that achieved by P2L, and P2L's bound still outperform Pac Bayes’. Regarding the post-training performance: once again P2L and SGD+test-set provide post-training performances that are slightly superior to Pac-Bayes. More precisely, in the present setting P2L slightly outperforms SGD+test-set. These results are due to the fact that, in this regime, leaving out datapoints from the training phase (as done with SGD+test-set and a pre-training fraction of 0.5) has little cost, since the size of the training dataset remains large enough to reach (almost) the optimal model. We aim to clarify these points in final version of the manuscript. | Method | Risk bound | Post-training performance | |--------------|------------|---------------------------| | P2L | 2.31% | 0.99% | | Pac-Bayes | 2.54% | 1.65% | | SGD+Test-set | 1.65% | 1.52% | * “Appendix B: is …”: The work of Wu et al. 2020 is certainly interesting, as it provides novel concentration bounds for random variable with ternary values, e.g., with values in {-1,0,1} as opposed to the classical binary setting of {0,1}. At the same time, we note that our work considers an appropriateness criterion that correspond to a binary loss function (either h is appropriate or not). In the binary settings, the same authors observe that their novel bound behaves similarly to existing kl-bounds [Wu et al, lines 5/6 of Introduction, lines 3/5 of Discussion]. Still, we find the work of interest and make sure to mention this point in our final version of the manuscript. * “I understand that …”: We thank the Reviewer for the insightful comment. We wish to emphasize that the aim of our work is to show that, by using P2L, we can learn a hypothesis that has both post-training performance and risk bounds that are equally good or better compared to state of the art PAC-Bayes or SGD+test-set. In this respect, P2L is not designed with the aim of improving the performance of *any* internal algorithm, but rather that of providing *an* internal algorithm with a generalization bound -- when this is not readily available -- while maintaining a desirable post-training performance. Thus, we believe our comparison to be meaningful and our numerical analysis to support this statement. Certainly, it is interesting to investigate whether P2L can improve the performance of other inner algorithms, in particular P2L+PAC-Bayes, as suggested by the Reviewer. This is a direction we will investigate. * “Does it exist …”: There certainly is a growing body of literature referred to as “data compression” / “dataset selection” / “coreset selection”, e.g., [A,B,C] that aims at reducing a given dataset’s size while ensuring that the performance of the resulting trained models is comparable. However, these works are different both in their motivation (compressing the dataset is motivated by a computational issue), and in their results. Indeed, none of these works produces a compression scheme in the sense of the classical definition (i.e., such that the model’s output is identical), and certainly they do not enforce the property of *preferent* compression. Both of these properties are crucial to derive a generalization bound such as ours, and we believe our work to be the first -- to the best of our knowledge. * [A]: Dataset pruning: reducing training data by examining generalization influence. * [B]: Deep Learning on a Data Diet: Finding Important Examples Early in Training. * [C]: An Empirical Study of Example Forgetting during Deep Neural Network Learning * “What is the time …”: The following are running times for one instance of MNIST, i.e., a single dataset with 1000 datapoints (pretraining factor =0.5) on the following machine: MBPro 2021, Apple M1 Pro CPU, 32gb ram: | Method | Execution time | |--------------|:--------------:| | P2L | 2min 1s | | Pac-Bayes | 4min 4s | | SGD+Test-set | 0min 5s | --- Rebuttal Comment 1.1: Title: Thank you for your reply Comment: I thank the authors for their careful reply. Most of my concerns are addressed with your rebuttal. I have only a remaining concern about the relevance of comparison in PAC-Bayes. Indeed, I did not understand why you affirm that 'P2L is not designed with the aim of improving the performance of any internal algorithm, but rather that of providing an internal algorithm with a generalization bound'. To me, a meta algorithm is not an internal one. In the context of Algorithm 1, I would call the learning algorithm $L$ an internal procedure. As PAC-Bayes learning aims to provide such a learning algorithm $L$, it seems odd to compare PAC-Bayes to P2L +GD instead of PAC-Bayes with PAC-Bayes + P2L. That being said, I remain an enthusiast about P2L as it is time-efficient and comes with strong theoretical guarantees, but it seems still unfair to me to claim that 'P2L dominates the PAC-Bayes approach' (l.252) as you are comparing a PAC-Bayes learning algorithm with a learning algorithm (GD) enhanced with P2L: P2L and PAC-Bayes are not incompatible, and PAC-Bayes + P2L could solve the fact that in PAC-Bayes, 'the posterior training does not exploit the extra data to improve the model, but rather to certify it.' (l. 260). Did I miss something? PS: Please be sure that I am not necessarily asking for new experiments if those are time-consuming, there are apparent strengths of P2L with respect to the naive PAC-Bayes approach, but to me, it seems strange to compare a learning algorithm with a meta one. --- Reply to Comment 1.1.1: Comment: We would like to thank the Reviewer for the clarification. Our goal was indeed that of comparing P2L+GD against Pac-Bayes and not P2L in itself against Pac-Bayes. Indeed, while P2L is a meta-algorithm, the composition of P2L with GD (P2L+GD) returns a learning algorithm, which then can be compared with other learning algorithms to check what performances P2L+GD offers, both in terms of post-training performance and of enabling generalization bounds. We apologize if this was not conveyed properly. Following up on this, our motivation for comparing P2L+GD with PAC-Bayes is that PAC-bayes algorithms have recently pushed the state of the art in terms of being able to learn a good hypothesis while simultaneously certifying it. In this sense, we put P2L+GD at the same level of PAC-Bayes algorithms: both generate a hypothesis and a risk bound, which we are interested in comparing. We also agree with the Reviewer that one could consider running P2L+Pac-Bayes, however we had not considered this as PAC-Bayes algorithms already provide in themselves a risk bound without needing to be used as inner algorithms of P2L.
Summary: In this paper, the authors present a novel framework called P2L, which aims to derive generalization guarantees for black-box supervised learning algorithms. P2L operates as a meta-algorithm that utilizes a learning algorithm to induce a compression scheme. The algorithm relies on two main components: a *criterion of appropriateness* and an *appropriateness threshold*. The criterion of appropriateness serves as a generalization of the loss function, measuring how well a hypothesis describes a given data point. The appropriateness threshold determines when the meta-algorithm terminates by ensuring that the hypothesis is appropriate for all unselected data points. At each step, the algorithm selects the example with the highest loss, according to the criterion of appropriateness, from the set of selected training data ($T$). This example, denoted as $\bar{z}$, is added to $T$, and the learning algorithm produces a hypothesis ($h$) based on $T$. Using this new hypothesis, a new $\bar{z}$ is selected, and the process iterates until h becomes appropriate for all examples in the original dataset ($D_S$). An important property of P2L is that running the algorithm on $T$ yields the same hypothesis as running it on the full dataset $D_S$. Consequently, $T$ effectively compresses $D_S$, similar in spirit to dataset distillation or identifying core examples within a dataset. To provide generalization guarantees, a theorem demonstrates that the cardinality of $T$ can be utilized to derive tight upper bounds on a suitable measure of statistical risk. The effectiveness of P2L is evaluated through experiments on binary MNIST and synthetic regression datasets. The results indicate that P2L outperforms the PAC-Bayes bound and performs competitively with using a hold-out set, all without requiring additional data. Strengths: - To the best of my knowledge, the proposed algorithm is novel and quite elegant (computational complexity notwithstanding). It is another realization of the principle “compression is intelligence”. - I find the technical tools used interesting and believe that this framework can potentially lead to many future works. - The resulting bounds are tighter than PAC-Bayes bounds. - The presentation of the algorithm is very clean and easy to follow. Most technical concepts are explained carefully and illustrated with examples. This is quite rare for theory papers. - The algorithm exhibits some very interesting properties that are potentially desirable for learning algorithms. (*"the misclassification on the test-set for P2L is constant across all prior/train portions"*) Weaknesses: - In contrast to many existing generalization bounds, P2L can only be used for risk certification, that is, it does not offer an immediate way to make the bound tighter (which may improve the performance of the model) nor does it provide an understanding of why particular algorithms or architecture work well. Of course, this is partially due to the fact that P2L is designed to be as general as possible (i.e., black-box) but depending on the goal, the latter is sometimes more important than the risk certification itself. - The algorithm seems not particularly efficient which limits its practicality, especially for deep learning with very large datasets. If the model is trained from scratch at every iteration, the complexity could be quadratic in the number of data points (i.e., $N^2$). On the other hand, it seems more suited for applications where the dataset is small. Perhaps it's better to phrase the paper in that direction. If this is not the case, please correct me. The authors do already address this point in the conclusion but note that PAC-Bayes generalization bounds for deep learning often use Gaussian posterior which is easy to sample from and empirically the estimate concentrates quite fast. Furthermore, these bounds can be made deterministic (Nagarajan et al., 2018). - Related to the previous point, the paper would benefit from more empirical evaluation such as those in (Lotfi et al., 2022) who provided the SOTA PAC-Bayes bounds for several benchmarks. The field of PAC-Bayes bounds for deep learning has made significant progress in the past couple of years and only having binary MNIST results makes it hard to judge the empirical value of the proposed algorithm. - The paper is generally well-written and clear but there are several important clarifications needed (see questions). **Reference** - PAC-Bayes Compression Bounds So Tight That They Can Explain Generalization. Lotfi et al., 2022 - Deterministic PAC-Bayesian generalization bounds for deep networks via generalizing noise-resilience. Nagarajan et al., 2018 Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The role of $h_0$ is somewhat unclear to me from the text. What does "*pre-training $h_0$ on different portions of the dataset, again through GD*" mean (line 235)? Why is the model not initialized from scratch? In algorithm 1, where is $h_0$ used? Is it only used for selecting the initial $\bar{z}$? - Why did you use GD instead of SGD? Wouldn’t this result in worse models? How do the bounds compare if you use SGD? - In the inner loop, does the training have to be done from scratch or can it rely on the hypothesis from the previous iteration? If it is done from scratch, it feels like the algorithm would be extremely impractical since every iteration has to train a new neural network. If it is done starting from the weights from the previous iteration, wouldn’t the weights be prone to overfitting, especially if the problem is non-convex? - Is post-training performance just the test performance? If so, could you elaborate on the following statement “*utilizes all data to jointly learn a good model and provide a risk bound*"? Do you have a theoretical justification for this statement? It feels like this could have some relationship to curriculum learning or boosting. - Is there a relationship between $\gamma$ and the notion of margin? Why does $\gamma$ not show up in the bound? Is the dependency implicit in $T$? - Is the total ordering determined by the most recent hypothesis $h$ or the meta-algorithm? There seem to be some conflicting statements in the paper. “$[T]_A$ *is a list that contains the elements in* $T$ *ordered according to the order in which they are selected by P2L*” (line 150) but in the conclusion *“a hypothesis-dependent total order used to select which data points are fed to the learning algorithm L.”* (line 346). In a similar vein, how is the ordering used in the experiment since GD does not care about the order of the data at all? - Does this framework have a relationship to the algorithmic stability framework? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See the weakness section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the Reviewer for their positive review and constructive comments. We include our responses below, which we hope can help better understand and position our paper. **Weaknesses** * “In contrast…”: while it is true that our bound cannot be directly optimized, the size of the compression set T (and the corresponding bound on the risk) are informative indicators of whether the chosen architecture had worked well. When this is not the case, the flexibility of P2L allows to accommodate and test a different inner algorithm with the goal of improving the result. * “The algorithm seems…”: we agree with the Reviewer that P2L is, in its current form, more suited to offline problems with dataset of relatively small size – a point that we will emphasize more. However, please also note that, in situations where P2L works well (i.e. a small size of the compression is achieved), P2L halts at early stages, during which the inner algorithm can be executed efficiently since it is run over a smaller portion of the dataset. **Questions** * “The role of $h_0$…”: As you pointed out, $h_0$ is used to select the first $\bar{z}$ to be put in T. From that step on, $h$ is computed via the inner learning algorithm fed with T. As natural, note that $h_0$ has an impact on the algorithm: if $h_0$ is already good, then the algorithm just needs to refine it and will likely terminate at early stages with a small size of the compression set; if $h_0$ is poor, P2L may need some additional iterations to construct a set T from which to obtain a sensible hypothesis. In this respect, pre-training is introduced solely to guide the selection of $h_0$. In particular, a portion of the training dataset is used to generate a sensible $h_0$ to start from, and then P2L is used on the remaining data. We apologize if this was not clear and will work to better explain this point in the final version of the paper. * “Why did you use GD…”: We tried both GD and SGD and they gave similar results. We will mention this in the paper. * “In the inner…”: the generality of P2L allows one to choose either of the following approaches: one can either re-train a hypothesis from scratch at every iteration or keep the previously obtained hypothesis as initialization (this was our choice in the simulations). The two implementations can be seen as slightly different inner learning algorithms for P2L. Interestingly, the certification on the risk that we provide allows one to understand whether overfitting has arisen (in which case the risk certification will turn out to be close to 1) and therefore to opt for a different approach, or inner algorithm altogether. * “Is post-training…”: yes, by post-training performance we mean the performance on a test dataset. To be precise, we use a test dataset to assess the actual performance of the three methods (e.g., 10000 examples for MNIST), in addition to the dataset D employed by each of the methods. At the same time, we note that, while P2L and PAC-Bayes are allowed to utilize the whole dataset D to optimize the hypothesis, SGD+test-set is forced to further divide D in D1 and D2 where D1 is used for training and D2 to return a certification of the risk. The sentence you mentioned is motivated by the experimentally verified fact (see Figure 2) that P2L always obtains an actual performance (as measured on the hold-out dataset) which coincides with that obtained by SGD+test-set when one takes D1 = D (the “good model” in the sentence). For this model, SGD+test-set cannot provide any risk certification since D2 is empty, while P2L instead is able to provide meaningful risk bounds. We apologize if this was not clear and aim to further discuss this point in the final version of the paper. * “Is there a relationship...”: within our setting, the “margin” is the difference between gamma and the distance of the worst example from the chosen model. As correctly pointed out by the Reviewer, the risk depends on gamma, but gamma does not appear in the bound because the dependence is implicit in T. Indeed, T and the compression size depend on the chosen value of gamma: for example, if gamma is selected to be very large, the corresponding compression T will likely be small because a coarse model will be sufficient to accommodate the points within a (large) threshold of gamma. * “Is the total ordering…”: there are two different ordering coming into play here, and we apologize if this was not clearly conveyed. On the one hand, we have an ordering relation over the datapoints in $D_s\setminus T$ that depends on $h$, and which is used to select the worst example for the present $h$ to augment T. On the other hand, the order of the elements in the list $[T]_A$ simply refers to the position of the elements in the list and is solely used to enable inner learning algorithms that depend on the order with which they are fed with data, e.g., the output of SGD depends on the order with which the points in the dataset T appear. In case of order-of-feeding independent algorithms, like e.g. GD, the order of the elements in $[T]_A$ is simply ignored by the algorithm and has no effect on the final result. * “Does this framework…”: While the framework is not related to the algorithmic stability framework of [A], the notion of stable compression schemes (introduced by [B] and recently reconsidered in [C]), which is equivalent to the notion of preferent compression scheme, is instead central. Many thanks for the question, we will mention this equivalence in the paper. **References** * [A] Bousquet & Elisseff. Stability and Generalization, JMLR, 2022 * [B] V. Vapnik and A. Chervonenkis. Theory of Pattern Recognition, 1974 * [C] Hanneke & Kontorovich, Stable sample compression schemes: new applications and an optimal SVM margin bound. PMLR, 2021 --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. Most of my concerns have been addressed. I intend to keep my current rating. I have one more question: > the certification on the risk that we provide allows one to understand whether overfitting has arisen Can you elaborate on this point? And also can you discuss the trade-off of retraining from scratch vs not in the inner loop? --- Reply to Comment 1.1.1: Comment: * Regarding diagnosing overfitting: The point we wished to convey here — perhaps too concisely — is that, by providing the final hypothesis with a generalization guarantee, our risk bound also indirectly informs on how effective the selection of model and the training procedure have been. This, in turn, can help diagnose instances in which overfitting has arisen. For example, instances in which the loss on the training dataset is small, but the generalization risk provided by our bound is high are immediate candidates for overfitting. We thank the Reviewer and will mention this point in the final version of the manuscript. * Regarding retraining from scratch vs not: This is another interesting point, and indeed there is a potential tradeoff to be uncovered. On one hand, the rational for not retraining from scratch at every iteration is twofold. First, there is a potential computational advantage as fewer gradients steps are often needed to update the hypothesis since we start from a reasonably good one already. Second, such an approach often allows for the hypothesis $h_k$ to be less sensitive to the addition of one datapoint. This is typically helpful in the last stages of the algorithms where there isn’t much to be learned and the main goal is that of terminating quickly, a task which would be more difficult if the hypothesis was to change significantly. On the other hand, re-training from scratch also has advantages, mostly in that the algorithm is given full flexibility in moving towards a better hypothesis, an aspect that is often convenient in the first phases of P2L. Overall, we have tested both solutions and found that, for the problems considered, the first approach has an advantage over the second.
null
null
Rebuttal 1: Rebuttal: We would like to take this opportunity to thank all the Reviewers and PCs for their valuable time and feedback. Below, we address each of the reviews individually, while we attach here a pdf containing additional figures used in the responses. Pdf: /pdf/3db9681c4a51af7716dd350ec5327c26614d9db5.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Uncoupled and Convergent Learning in Two-Player Zero-Sum Markov Games with Bandit Feedback
Accept (poster)
Summary: The paper studies the problem of designing uncoupled learning dynamics that provably converge to Nash equilibria in two-player zero-sum Markov games. As a preliminary result, the paper introduces the first dynamics that converge last iterate in self play in matrix games with bandit feedback. Then, the paper shows how to extend such last-iterate-convergent learning dynamics to the case of irreducible Markov games. Finally, the paper studies the case of general Markov games, proving that a variation of the previously-introduced learning dynamics achieve convergence according to a newly introduced definition of convergence (called path convergence). Strengths: The problem studied in the paper is interesting and it has received considerable attention over the last years. The paper is well written and easy to follow. Weaknesses: A major weakness that I see is that the results presented in the paper seem minor adaptations and different analysis of already known techniques/tools. The authors should focus more on discussing the novelty of their approach. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please resolve my concerns in the weaknesses part. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments! We address your concerns below. *Q: A major weakness that I see is that the results presented in the paper seem minor adaptations and different analysis of already known techniques/tools. The authors should focus more on discussing the novelty of their approach.* A : Although our algorithms share similarity with previous works that also use entropy regularization, we believe that both the design and the analysis of the algorithms are novel and non-trivial. To the best of our knowledge, all previous entropy regularized two-player zero-sum Markov game algorithms are coupled (e.g., [1,2,3]), while ours is the first that achieves uncoupledness under entropy regularization. We will further discuss this by comparing our algorithms to those in [3], highlighting the new technical challenges we encounter. The entropy-regularized OMWU algorithm in [3] is tailored to the full-information setting and their value function updates require both players' entropy information: $ V^{t+1}(s)=(1 - \alpha_{t+1})V^t(s) +\alpha_{t+1} ( x^{t+1}(s)^\top Q^{t+1}(s)y^{t+1}(s) + \tau \phi(x^{t+1}(s)) - \tau \phi(y^{t+1}(s))$. This necessitates both players to know the entropy value of the other player's policy, which is unnatural. Indeed, the authors explicitly present the removal of this information sharing as an open question: *[the introduction of entropy regularization requires each agent to reveal the entropy of their current policy to each other, which prevents the proposed method from being fully decentralized. Can we bypass this by dropping the entropy information in value learning? We leave the answers to future work.]* We answer this open question affirmatively by giving a fully decentralized algorithm for zero-sum Markov games with provable last-iterate convergence rates. In Algorithm 2, the update of the value function $V$ is simple and does not require any entropy information: $V_{t+1}^{s_t} \leftarrow (1-\alpha_\tau)V_t^{s_t} + \alpha_\tau \left(\sigma_t + \gamma V_t^{s_{t+1}}\right)$. This modification results in a discrepancy between the policy update and the value update. While the policy now incorporates a regularization term, the value function does not. Such a mismatch is unprecedented in earlier studies and necessitates a non-trivial approach to resolve. Additionally, Algorithm 2 operates on bandit feedback instead of full-information feedback, presenting further technical challenges [1] Cen, Shicong, Yuting Wei, and Yuejie Chi. "Fast policy extragradient methods for competitive games with entropy regularization." NeurIPS 2021. [2] Ziyi Chen, Shaocong Ma, and Yi Zhou. Sample efficient stochastic policy extragradient algorithm for zero-sum markov game. ICLR 2021. [3] Cen, Shicong, Yuejie Chi, Simon S. Du, and Lin Xiao. "Faster last-iterate convergence of policy optimization in zero-sum Markov games." ICLR 2023. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response. They convinced me that there is indeed several novel components in their analysis. Thus, after having a look back to the paper and to the other reviews, I decided to raise my score accordingly.
Summary: The paper introduces a new algorithm for learning in two-player zero-sum Markov games based on prior work that is uncoupled (agents only need their own reward as feedback), convergent (to NE) and rational. The result is an algorithm that is similar in concept to a single agent RL algorithm, but guarantees convergence if both players use the same algorithm. For their initial algorithm, there is an additional assumption of irreducibility on the Markov games, and it achieves last iterate convergence to the Nash with rate O(t^{-1/9+eps}). Meanwhile, for general 2 player zero sum Markov games without additional assumptions, a modification of the algorithm that applies optimism achieves O(t^{-1/10}) path convergence rate. Strengths: The algorithm proposed takes inspiration from prior works and makes useful modifications that give practical and theoretical advantages. The structure of the results and explanation of analysis also flows well and is easy to read despite the technical complexity. The authors are also very clear in their motivation and I find the literature review to be quite comprehensive. Weaknesses: The paper does a good job of setting up motivation and prior ideas, but in my opinion the key takeaway from this work is that there is now a truly uncoupled and convergent algorithm for general Markov games, which is in my opinion the paper's most significant contribution. However, there is too much space dedicated to matrix and irreducible Markov games, when in my view, the section on general Markov games should be greatly expanded. In addition to the overview of the analysis, I am interested to know how optimism can be leveraged in this type of algorithm, and the thought process behind the modifications to Algorithm 3 compared to Algorithm 2. As a suggestion, perhaps part of the analysis overview for Thm 3 can be repurposed and moved forward to add to the explanation about Alg 3, which would improve the flow of the section in my opinion. Finally, there is a lack of experimental results which would make help frame the results better in comparison to prior work. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Have any experiments been run using Algorithms 1, 2 and 3 to compare them to current SOTA methods? If so, how does the empirical performance compare? It seems that your algorithms would scale well with larger games since players only use local information, how does this compare to existing methods? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the paper's limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive and constructive feedback! We will add more discussions on the optimism technique in Algorithm 3 in the revised version. We address your question below. *Q: Have any experiments been run using Algorithms 1, 2 and 3 to compare them to current SOTA methods? If so, how does the empirical performance compare? It seems that your algorithms would scale well with larger games since players only use local information, how does this compare to existing methods?* A: To our knowledge, our work presents the first provable algorithms that are uncouple, convergent, and have finite rate under bandit feedback. Therefore, there is no comparable previous state of the art in our setting. However, if we only care about computing Nash equilibrium under bandit feedback (without caring about uncoupledness and convergence), there are a few algorithms that have faster rates than our work. For example, the V-Learning algorithm [1] has a faster $O(t^{-1/2})$ *average-iterate* convergence rate but has no last-iterate/path convergence guarantee. We do think that it is a very interesting future direction to evaluate the empirical performance of our algorithms in applications such as Game AI [2]. [1] Jin C, Liu Q, Wang Y, et al. V-Learning--A Simple, Efficient, Decentralized Algorithm for Multiagent RL[J]. arXiv preprint arXiv:2110.14555, 2021 [2] Perolat J, De Vylder B, Hennes D, et al. Mastering the game of Stratego with model-free multiagent reinforcement learning[J]. Science, 2022, 378(6623): 990-996. --- Rebuttal Comment 1.1: Title: Response to Author Rebuttal Comment: Thanks for the answer to my question. If the changes suggested by the authors is implemented I think this is a good paper worthy of acceptance. Best regards, Reviewer EDuK
Summary: The paper studies the last iteration convergence of uncoupled learning in two player zero-sum markov game, with bandit feedback. The paper provide the first finite last-iterate convergence guarantee under the bandit feedback. The paper studies the problem of two-player zero-sum Markov game and designs a new uncoupled learning algorithm that enjoys last iteration convergence, under bandit feedback. Along the way, the paper derive a couple of results, summarized below: (1) Even for the standard matrix game (where there is no Markov transition), the paper provides an algorithm with $O(T^{-1/8})$ convergence rate (2) For irreducible Markov game, the paper provides an algorithm with $O(T^{-1/9})$ convergence rate (3) For general irreducible Markov game, the paper provides an algorithm with $O(T^{-1/10})$ convergence rate, but only under the notion of path convergence. It is worth noting that there is a large body of literature, but previous work [WLZL21] are either not uncoupled, or [BJY'20 ] not last iterate convergent. The technical part seems novel, from a high level, the results are obtained by adding an entropy regularizer, but there are many subtle details to make it really work and the analysis is involved. ------------------ I have read the author response and want to keep my positive evaluation. Strengths: The paper studies a fairly popular topic and provide the first finite last-iterate convergence guarantee for Markov game (two-player, zero-sum). The technical contribution seems to be novel. The paper is also well-written and the technical part is well explained. Weaknesses: No. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: I have a few minor questions: (1) Can you provide some intuition on the number of $T^{-1/8}$ or $T^{-1/9}$, ideally, it would be nice if one can written down a short explanation on how these magic exponent comes from. (2) The following paper seems very relevant? [1] Regret Minimization and Convergence to Equilibria in General-sum Markov Games Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: no. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your very positive comments! We address your questions below. *Q1: Can you provide some intuition on the number of $T^{-1/8}$ or $T^{-1/9}$, ideally, it would be nice if one can written down a short explanation on how these magic exponent comes from.* A: These exponents are a result of the parameters $k_\eta, k_\beta, k_\epsilon$ (with $\eta_t = t^{-k_\eta}$, $\beta_t = t^{-k_\beta}$, $\epsilon_t = t^{-k_\epsilon}$) we choose to optimize the convergence rate. For example, the analysis of Algorithm 1 (see Appendix B) shows that the last-iterate convergence rate (ignoring log factors and other dependence) is $O(t^{\frac{-k_\eta + k_\epsilon}{4}} + t^{\frac{k_\beta}{2} - k_\eta} + t^{\frac{-k_\beta+k_\epsilon}{2}} + t^{\frac{-k_\eta +k_\beta}{2}} + t^{\frac{-1+k_\eta +k_\epsilon}{2}} + t^{-k_\epsilon})$. The choice of $k_\eta = \frac{5}{8}, k_\beta = \frac{3}{8}, k_\epsilon = \frac{1}{8}$ gives us the optimized $O(t^{-\frac{1}{8}})$ rate. *Q2: The following paper seems very relevant? [1] Regret Minimization and Convergence to Equilibria in General-sum Markov Games* A: Thank you for pointing this out! We will add discussion about this paper in the revised version of our paper. Erez et al. (2022) [1] studies regret minimization in general-sum Markov games and provide algorithm with sublinear regret under self-play and *average-iterate* convergence to equibria, while our work focuses on *last-iterate* convergence rates to Nash equilibrium.
Summary: This paper studies the problem of learning a Nash equilibrium in two-player zero-sum Markov games with bandit feedback. The proposed algorithm introduces the entropy regularization technique into online mirror descent. First, the author proves that the proposed algorithm converges to an equilibrium in two-player normal-form games. Then, the last-iterate convergence rate for irreducible Markov games is provided. Finally, the paper presents a path convergence rate for Markov games without the irreducible assumption. Strengths: * The problem is well-motivated. In many scenarios, the last-iterate convergence property under bandit feedback is more suitable than the average-iterate convergence property. * It seems novel to derive the last-iterate convergence rates under bandit feedback. * The proof sketch of Theorem 1 is intuitive and easy to follow. Weaknesses: * Since action probabilities of strategies in $\Omega_t$ are lower bounded by $\frac{1}{At^2}$, line 6 in Algorithm 1 seems to have no closed-form solution. I wonder how much computational cost it will take to update the strategy. * The term of $\ln^{20}(SAT/\delta)$ in Theorem 3 depends heavily on $T$. This term would be dominant practically and would not be able to be ignored. * I could not completely follow the sketch of Theorem 3. I would have appreciated more detail. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * How much computational cost will it take to update the strategy (e.g., line 6 in Algorithm 1)? * Does the path convergence imply the average-iterate convergence? * What prior knowledge of the game is required for setting of $\epsilon,\beta$, and $\eta$ in Theorem 3? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive and constructive comments. We will include a more detailed proof sketch of Theorem 3 in the revised version. Your questions are addressed below. *Q1:How much computational cost will it take to update the strategy (e.g., line 6 in Algorithm 1)?* A: To update the strategy, the main computation happens in line 6 where we need to solve the following optimization problem over $\Omega_t=\{x \in \Delta^{|A|}: x[i] \ge \frac{1}{|A|t^2}\}$: $x_{t+1} = \arg \min_{x \in \Omega_t} (g_t^\top x + \frac{1}{\eta} D(x, x_t))$. By the KKT condition, we have $$x_{t+1}[i] = \frac{x_t[i] \cdot \exp(-\eta_t(g_t[i] + \lambda_i))}{\sum_{i'\in[|A|]} x_t[i'] \cdot \exp(-\eta_t(g_t[i'] + \lambda_{i'}))},$$ where $\lambda_i = 0$ if $x_{t+1}[i] > \frac{1}{|A|t^2}$ and $\lambda_i\le 0$ if $x_{t+1}[i] = \frac{1}{|A|t^2}$. The computation of feasible $(x_{t+1}[i], \lambda_i)$ can be done in $O(|A|\log |A|)$ time as explained below. For simplicity, let us denote $h[i] := x_t[i]\cdot\exp(-\eta_t g_t[i])$. The algorithm works as follows: 1. sort $i$'s based on the value of $h[i]$ in increasing order and let us assume w.o.l.g. that $h[1] \le h[2] \ldots \le h[|A|]$; 2. find $j \in [|A|]$ such that $\frac{j\cdot h[j]}{j\cdot h[j] + \sum_{i=j+1}^{|A|} h[i]} \le \frac{j}{|A|t^2}$ and $\frac{(j+1)\cdot h[j+1]}{(j+1)\cdot h[j+1] + \sum_{i=j+2}^{|A|} h[i]} > \frac{j+1}{|A|t^2}$; if such $j$ does not exist, then we can set $\lambda_i = 0$ for all $i$ and compute $x_{t+1}$. 3. compute $\theta$ such that $\frac{j\cdot \theta}{j\cdot \theta + \sum_{i=j+1}^{|A|} h[i]} = \frac{j}{|A|t^2}$ and set $x_{t+1}[i] = \frac{1}{|A|t^2}$ for $1 \le i \le j$ and $x_{t+1}[i] = \frac{h[i]}{j\cdot \theta + \sum_{i=j+1}^{|A|} h[i]}$ for $j+1 \le i \le |A|$. Sorting in step 1 can be done in $O(|A| \log |A|)$ time and the computation in other steps can be done in time $O(|A|)$. *Q2: Does the path convergence imply the average-iterate convergence?* A: Path convergence does not imply averge-iterate convergence. Path convergence concerns visited states $\{s_t\}$ only and the policies on states that are not reached in learning can be arbitrary, so it does not imply average-iterate convergence. Similarly, path convergence does not imply best-iterate convergence. However, we would like to remark that path convergence excludes the possibility of cycling and has many interesting game-theoretical implications (see our discussion in section 6.1 and Appendix F). *Q3: What prior knowledge of the game is required for setting of $\eta$, $\epsilon$, and $\beta$ in Theorem 3?* A: The prior knowledge of the game needed for setting $\eta$, $\epsilon$, and $\beta$ is the number of states $S$, the number of actions $A$, and the discounting factor $\gamma$. See Appendix G (page 39) for the concrete choice of these hyper parameters, and we plan to include it in the main body in the revised version.
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper studies algorithms for two-player zero-sum Markov games that are uncoupled, convergent, and rational. Previous attempts at designing such algorithms failed short in one aspect or the other. This work uses recent advances in entropy-based regularization to design new algorithms that overcome the inherent challenges. The other main contribution of this work is deriving the convergent rates of the proposed algorithms in Matrix, Markov, and general Markov games. Strengths: 1. The proposed algorithms do not require the two players to exchange information (including that related to entropy) 2. The proposed ideas extend to general Markov games without any assumptions on the dynamics. 3. The work seems technically sound. The algorithms follow easy-to-implement modifications of existing methods. 4. The problem setup, the algorithms, and the proofs are written in an easy-to-digest format. Weaknesses: See Questions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How tight are the obtained convergence rates? 2. My understanding of the proposed algorithm is that at time step $t,$ player 1 plays action $a _t$ and player 2 action $b_t.$ Thereafter, the local update steps are carried out. If this is correct, then why do the authors claim that their proposed algorithm does not require the two players to have synchronized policy updates? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors do address the limitations in their work adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive comments. We address your questions below. *Q1:How tight are the obtained convergence rates?* A: The obtained convergence rates in this paper may not be tight. For example, Algorithm 1 achieves a high-probability $O(t^{-1/8})$ last-iterate convergence rate under bandit feedback for matrix games, while the known best lower bound of convergence rate in this setting is $\Omega(t^{-1/2})$. However, we remark that no last-iterate/path convergence *rates* are known in the considered setting (i.e., uncoupled algorithm with bandit feedback) prior to our work. Improving the convergence rates and establishing matching lower bounds are very interesting future directions. *Q2: My understanding of the proposed algorithm is that at time step $t$ player 1 plays action $a_t$ and player 2 action $b_t$. Thereafter, the local update steps are carried out. If this is correct, then why do the authors claim that their proposed algorithm does not require the two players to have synchronized policy updates?* A: Thank you for pointing this out. Our interaction follows the standard model in the literature, where both players simultaneously choose an action in each round. This is not what we meant by ``synchronized policy updates.'' By "synchronized policy updates," we refer to the method in [1] where each player is directed to use a fixed policy when interacting with others for an extended period. Afterward, all players update their policies simultaneously. We find such coordination between the players unnatural, and the subsequent algorithm no longer maintains the no-regret guarantee against adversaries. This drawback is also pointed out by [2] (see their second bullet point on Page 3). In contrast, the algorithm in our paper uses a very simple policy update rule and is robust (no-regret) even against an adversary. We will add the above explanation to the revised paper. **References:** [1] Wei, C. Y., Lee, C. W., Zhang, M., & Luo, H. Last-iterate convergence of decentralized optimistic gradient descent/ascent in infinite-horizon competitive Markov games. COLT, 2021 [2] Sayin, M., Zhang, K., Leslie, D., Basar, T., & Ozdaglar, A. Decentralized Q-learning in zero-sum Markov games. NeurIPS, 2021 --- Rebuttal Comment 1.1: Comment: Thank you for your response. The gap between the lower bound and the obtained upper bound iss extremely high. It may be useful to have a discussion, highlighting the key reasons for the gap. Secondly, the use of the terminology `synchronous' seems incorrect, and you may want to consider an alternative description. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback! 1. We will include a discussion on the gap between upper and lower bounds. We first remark that if we only care about convergence in expectation rather than high-probability bound, then we can get $t^{-\frac{1}{6}}$ last-iterate convergence rate in matrix games (see Appendix C). Towards closing the gap between upper and lower bounds, the following directions are promising. Firstly, [1] proves an impossibility result that specific algorithms with $\sqrt{T}$ regret does not converge in last-iterate, indicating that the current $\frac{1}{\sqrt{T}}$ lower bound on convergence rate may not be tight. Secondly, our results provide insights and a useful template for further improvement on upper bounds. For instance, instead of EXP3 update, adaptation of optimistic policy update or accelerated first-order methods to the bandit feedback setting might give faster rates. 2. Thank you for pointing out! We will change the term "synchronous policy update" to "coordinated policy update" in the revised version. [1] Muthukumar, Vidya, Soham Phade, and Anant Sahai. "On the Impossibility of Convergence of Mixed Strategies with No Regret Learning." arXiv preprint arXiv:2012.02125 (2020).
null
null
null
null
null
null
Polynomial Width is Sufficient for Set Representation with High-dimensional Features
Reject
Summary: Summary: The paper proves that for symmetric neural networks, specifically DeepSets, there exist exact representations for symmetric functions, where the symmetric embedding layer width can be chosen to be polynomial in the set size and input dimension, rather than exponential as shown in stricter settings. Strengths: Strengths: The proof technique is clever, managing to continuously invert a specific symmetric embedding. It’s also quite surprising, as the set of multisymmetric polynomial generators is exponentially large in N and D, which intuitively suggests that a map from the symmetric set to a strict subset of the generators wouldn’t be invertible. The insight that this map can be inverted (at the price of perhaps being quite non-smooth) is a novel one. I think this is a useful result in further understanding the capabilities of the DeepSets architecture specifically. Weaknesses: Weaknesses: I think the paper would benefit from a more robust discussion of the tradeoffs of this parameterization. In particular, in the complex setting and under some standard network assumptions, the parameterization doesn’t have exponentially large width in the symmetric embedding layer L, but it must pay exponentially large width somewhere else. For example, consider input dimension D = 1 and set size N > 1. Assume N is odd, and let z denote a N-th principle root of unity. Consider input sets x = 1/2 * (z^0, z^1, … z^{N-1}), and y = -x. One can confirm that all the power sums p_k(x) = p_k(y) = 0 for 1 <= k <= N-1, and p_N(x) = N * (1/2)^N and p_N(y) = -N * (1/2)^N. If psi_N is the map of the first N powersums as in definition 2.6, then we have d(psi_N(x), psi_N(y)) = N * (1/2)^{N-1}, but d(x,y) = O(1/sqrt(N)) (where this is using the appropriate notion of distance on sets, i.e. infinity norm modulo permutation). All this to say, the inverse map psi_N^{-1} has a Lipschitz constant that is exponentially large in N. So for a neural network of constant depth, with an activation with bounded Lipschitz constant and polynomially bounded weights, representing this function would require exponentially large width. Of course this is a toy example. But it’s extremely difficult to tell how non-smooth and nasty the parameterization given will be in general, especially when D > 1. I understand the authors leave this question to future work, but in that case I think it’s useful to discuss that there’s no free lunch, and the given argument does not guarantee an efficient network overall. More broadly, the parameterization is somewhat unrealistic. The parameterization focuses specifically on mapping the set data into a symmetric embedding, inverting the embedding, and then feeding the original set information through another parameterized function. So one struggles to see why to use DeepSets in the first place. You still need to parameterize a symmetric function somehow - it would be silly to parameterize rho itself with DeepSets, but then how would you parameterize it? The fact that, in practice, DeepSets works better than networks that don’t enforce the symmetry constraint suggests this parameterization is not a very practical model. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: N/A Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank reviewer S2JM for appreciating our proof technique and its deeper implication. We are also grateful for constructive suggestions to improve this manuscript. Please see our responses as below: **1. A more robust discussion of the tradeoffs of this parameterization.** We agree with reviewer's observation that given current construction, the inverse map $\Psi_N^{-1}$ may have an unbounded Lipschitz constant, which causes the complexity of $\rho$ may still grow exponentially. It is likely that the current construction still cannot bound the whole network size with polynomial many neurons for free, and we will add the discussion on this trade-off in our revision. **2. Parameterization is somewhat unrealistic.** We really appreciate the insights provided by the reviewer. We partially agree with the reviewer's argument but not all of them. First, this work is mainly to address an important theoretical question in this field. As the reviewer may also agree, the constructive proof may provides insights into not only just a particular architecture but also a broader idea of building homomorphism mappings for high-dim features whose order does not affect the mapping output. Regarding the practical aspect, although much practical evidence has shown that DeepSets is indeed hard to learn sometimes, there have been many NN architectures that adopt DeepSets as their key components due to its simplicity and computational friendliness. For example, when GNNs aggregate features from the neighbors of a node, the most commonly used operation follows weighted mapping + sum/average pooling + weighted mapping +... . We believe our analysis also provides some key ingredients to study the functional approximation capability of GNNs, and this topic has a wide range of applications in practice.
Summary: This manuscript proves that an embedding with polynomial width in the set size and feature dimension is sufficient to precisely reconstruct a set function, under some constraints on the embedding layer architecture. The main contribution is on the upper bound, which removes assumptions of previous studies, and reduces the gap exponentially. The scope of this study covers permutation-invariant and permutation-equivariant set functions in high-dim scenarios. The key idea of the proof is to construct the embedding vector of the claimed length, and show the injectivity of any input set elements with the proposed embedding layers. The weights are used to (1) save the original features; (2) form an 'anchor' of the set element, which 'shares' the same permutation with the set element features; (3) store the coefficients that mix the original feature and the anchor. Strengths: - The manuscript is clearly written. The problem is sound and well-motivated. - The results are significant, extending previous study to high-dimensions and exact representation of sets, in addition to tightening the upper bound from exponential to polynomial. Weaknesses: The major weakness of this study has been summarized in the limitation section already: (1) There are two important parameters in DeepSet, however only one is considered. (2) The lower bounds are trivial. (3) The upper bound depends on the specific neural network architecture. Another issue is the proof is an existential argument. That is if we find those weights, the network can precisely represent the set function. However it is not clear if the weights belong to some space as a convergence by training the network. If the authors could provide intuition on this it would be good. It took me longer than expected to understand the functionality of the three segments of the weights, especially the third one. I would suggest one more paragraph describing the mentality of the construction, together with the Eq 5,6,7. From the technique side, there is no novel technique involved, and this together with the major weakness would be my personal reasonable potential to reject this paper. But fairly speaking I think the depth is OK, and the results are sufficiently impressive. Thus I recommend accept. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Line 223 and 340 indicate the lower bound of $L\geq ND$ for the LLE architecture, it may be better to add to line 63. - In DeepSets, the $\rho$ and $\phi$ function can be implemented by several neural network layers of choice. Is function $\phi$ narrowed down to either 2 layers (LP) or 3 layers (LLE)? Or I believe at least the "linear" layer can be extended to multiple fully-connected layers. More generally, what is the flexibility in the network design during the implementation? - This is out of curiosity only, I think it would be more intrigued to study the unconditional lower bound of this problem. Theorem 2.4 seems to address this when $D=1$. What is the barrier of obtaining an $\Omega(f(N,D))$ for general $D$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Mentioned above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank reviewer gdbo for acknowledging the depth and significance of our work. Please see our responses as below: **1. The proof is an existential argument and the convergence of training is not guaranteed.** This work focuses on expressive power analysis for DeepSets and the main goal is to show such an architecture can represent every function in some class. Such proofs are often constructive and existential similar to many other works on expressiveness analysis for neural networks [1,2,3], including DeepSets [4]. Typically, the optimization procedure for such an architecture is beyond the scope of the study of expressive power. Also, we want to emphasize that the weights for the construction of LP layer that allow good expressive power are actually dense in the parameter space, which are not one particular set of weights and are amenable to optimization. [1] Vitaly et al. Lower bounds for approximation by MLP neural networks [2] Yarotsky. Error bounds for approximations with deep relu networks [3] Yarotsky. Universal approximations of invariant maps by neural networks. [4] Zaheer et al. Deep Sets. **2. One more paragraph describing the mentality of the construction.** With the correction in our general response, the proof idea for both LP and LE layer has been unified. The high-level idea of our construction consists of two major steps: 1) construct anchor with Lemma 4.4, and 2) couple each feature channel with the anchor through the mixing scheme provided by LP and LE, repsectively. To show injectivity, we first 1) invoke Lemma 4.5 and Lemma 4.8 to induce pair-wise alignment with anchors, and then 2) apply Lemma 4.3 to obtain union alignment. We have prepared an anonymous link to the revision, which includes a paragraph and a figure to demonstrate the contruction idea and the connection to each proof techniques. **3. The proof technique is not novel.** Our work stands on the shoulder of giants and the overall proof idea follows DeepSets by constructing continuously invertible sum-pooling. However, this result cannot be achieved without new mathematical establishment. To tackle high-dimensional features, we introduce a novel mathematical device: anchor, which is essential to show the injectivity of the sum-pooling and has not been used in the relevant literature, to our best knowledge. Other reviewers also appreciate the novelty behind. Despite intuitively defined, our Lemma 4.3 shows a significant result that by pairwisely coupling each feature columns with the anchor, one can impose the global alignment, which largely reduces the complexity of the construction. Also Lemma 4.4 reveals that such a useful anchor can be easily constructed via elementwise linear mapping. Besides, we invent two pairwise coupling scheme via a special class of linear combination and monomials, corresponding to LP and LE layer, respectively. **4. What is the flexibility in the network design during the implementation?** Akin to DeepSets, both $\phi$ and $\rho$ can be implemented by neural networks using their universal approximation property. We also note that construction established by DeepSets also fixes the initantiation of $\phi$ as a power series functions (cf. Lemma 4 in [1]). Our construction serves as the extension to DeepSets' construction to high-dimensional features. [1] Zaheer et al. Deep Sets **5. What is the barrier of obtaining an $\Omega(f(N, D))$ for general $D$?** Proving the lower bound requires another existential argument to find a function which cannot be represented if $L \le f(N, D)$. In [1], authors restrict $\phi$ and $\rho$ to be analytic functions, and find an analytic function which cannot be approximated by any such $\phi$ and $\rho$ without an exponential width. However, it becomes rather challenging to find a counter-example if we relax the constraint on $\phi$ and $\rho$ to arbitrary continuous functions. In this paper, we make an attempt to derive the lower bound by fixing the architecture of $\phi$. [1] Zweig et al. Exponential Separations in Symmetric Neural Networks
Summary: The paper studies the representative properties of DeepSets models, which are networks for length-$N$ sequential modeling that apply identical neural networks to each $D$-dimensional input $x^i$ to obtain $L$-dimensional features, sum up their outputs, and apply an additional neural network to the output. While past results have tightly characterized the setting where $D = 1$, positive results for the $D > 1$ case are less explored. A recent lower bound by Zweig and Bruna shows that $L$ must grow exponentially with $\sqrt{N}$ and $D$ in order to approximate any permutation-invariant function under the condition that the networks use analytic activations. The primary contribution of the paper Theorem 3.1, which claims that there exists a construction with $L = \mathrm{poly}(N, D)$ that exactly represents any permutation-invariant target as a DeepSets model with $L$-dimensional features. Strengths: The introduction is well-written and the problem is set up in a clear way. If the result is indeed correct, I think it would be an interesting contribution to the literature on DeepSets approximation properties and the limitations of current lower bounding techniques for DeepSets. Weaknesses: In it's current form, I am concerned that the result is not correct as written. There are two primary issues, which I would like to see addressed if I am to reconsider my score for the paper. First, I think the proof of the second part of Theorem 3.1 is false as written. The proof relies on Lemma 4.9, which claims that parirwise alignment is sufficient for union alignment, and as far as I am aware, is not proved in the paper or the appendix. I believe Lemma 4.9 to be false. Consider the following counter-example. For $D = 3$ and $N = 4$, let $x_1 = (0, 1, 1, 0), x_2 = (0, 0, 1, 1), x_3 = (0, 1, 0, 1) \in \mathbb{R}^N$ and $x_1' = (1, 0, 0, 1), x_2' = (1, 1, 0, 0), x_3' = (1, 0, 1, 0)$. Note that $X = [x_1^T \ x_2^T \ x_3^T ] \not\sim X' = [x_1'^T \ x_2'^T \ x_3'^T]$, since the first row of $X$, $x^1 = (0, 0, 0)$ does not belong to any row of $X'$. However, the two are pairwise aligned. * Note that $[x_1^T \ x_2^T] \sim [x_1'^T \ x_2'^T]$, with the permutation $\sigma = (1 \ 3) (2 \ 4) $ as witness. * $[x_1^T \ x_3^T] \sim [x_1'^T \ x_3'^T]$ with permutation $\sigma = (1 \ 2) (3 \ 4)$. * $[x_2^T \ x_3^T] \sim [x_2'^T \ x_3'^T]$ with permutation $\sigma = (1 \ 4) (2 \ 3)$. Since the proof relies on this lemma, the proof of the lemma does not appear, and there exists a simple counter-example, I am unable to accept second part of the main theorem as true. Second, while I have not pinpointed any major technical flaws with the proof of the first bullet, I am a unsure how the result does not contradict the lower bound on Zweig and Bruna. The theorem claims that the LP architecture (which consists of features with polynomial activations and continuous function that inverts the embedding) with $L = O(N^5 D^2)$ is sufficient to exactly represent any continuous permutation-invariant $f: \mathbb{R}^{N \times D} \to \mathbb{R}$. Theorem 3.4 of ZB suggests that if $L \leq N^{-2} \exp(O(\min(D, \sqrt{N}))$, then there exists some analytic $g$ such that any DeepSets model $f$ with feature dimension $L$ and arbitrary NNs with analytic activations cannot approximate $g$. As far as I understand both results, the only way these two results do not represent a contradiction would be if the continuous function that inverts the embedding $\rho$ is non-analytic, and hence cannot be represented the networks described by ZB. However, if our input $x$ is from a compact set (like the complex circle, from ZB) (and hence, the aggregated polynomial features $\sum_i \phi(x^i)$ belong to a compact set) and $\rho$ is continuous, then the universal approximation results for two-layer neural nets imply the existence of a sufficiently wide 2-layer neural network with analytic activations (like the sigmoid) that approximates $\rho$ arbitrarily well. I think this would contradict the ZB result on inapproximability. All that said, it's possible that I've overlooked or misunderstood something. If there is no contradiction or if the authors believe that the result by ZB is incorrect, please let me know, and I'd be happy to discuss further. Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: My questions are addressed above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 2 fair Contribution: 2 fair Limitations: My perceived limitations of the paper are detailed above. Beyond concerns over correctness, the theoretical results are direct about their limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank reviewer zB8R for your careful examination of our work and raise meaningful discussion. We have corrected all the flaws in our results and proofs, and all the modifications are summarized in our **general response**. Please read our detailed response as below: **1. Errors of Lemma 4.9 for LLE architecture.** After submission, we also noticed this flaw in our proof. We have corrected our proof in the new revision. Technically, instead of adopting Lemma 4.9 for union alignment, we equip our construction with anchor construction, and thus we can first utilize Lemma 4.3 to obtain $[x_i, a] \sim [x_i', a'], \forall i \in [N]$ and leverage Lemma 4.8 to reach $[x_1, \cdots, x_N] \sim [x'_1, \cdots, x'_N]$. We have prepared an anonymous link to the full version of the corrected proof. According to the rebuttal policy, we can post the link upon your request. We would sincerely appreciate it if you could leave a chance to re-assess our hard work. **2. Contradiction to ZB's result.** To our best knowledge, the universal approximation theorem with the analytic activation function (e.g., sigmoid) [1] is established over a compact disk $[0, 1]^L$. However, we note that $\rho$ is a continuous function over the domain $\mathcal{Z} = \{\sum_{i} \phi(x^{(i)}): X \in \mathbb{R}^{N \times D}\}$ whose topological structure may differ from a disk. Although $\mathcal{Z}$ is in the ambient space $\mathbb{R}^L$ and $\mathcal{Z}$ is compact if the input $X$ is from a compact space, it does not mean any continuous functions defined over $\mathcal{Z}$ can be universally approximated by NNs with analytic activations. Moreover, the function being continuous over $\mathcal{Z}$ does not mean it has a continuous extension over $[a,b]^L$, but the universal approximation theorem for analytic activations is applied to continuous functions over $[a,b]^L$. An example of missing such continuous extension is as follows: Consider a function defined on rational numbers in $[0,1]$, i.e., $Q_{[0,1]}$, $f(x)= 0$ if $x < 1/\pi$ and $f(x)= 1$ if $x > 1/\pi$. $f(x)$ is continuous in $Q_{[0,1]}$ while $f(x)$ does not have a continuous extension over $[0,1]$. Hence we doubt that universal approximation theorem with analytic functions may not be applicable to $\rho$ defined over $\mathcal{Z}$. We will include this remark into our revision. [1] Cybenko, G. "Approximation by superpositions of a sigmoidal function. --- Rebuttal Comment 1.1: Comment: > 1. **Errors of Lemma 4.9 for LLE architecture.** I appreciate the authors' detailed response, and I would be happy to review the full version of the corrected proof. Would you mind sending me the anonymous link? > 2. **Contradiction to ZB's result.** I am not an analysis expert, so it's possible that I am wrong about this. (And if any other reviewers have a background in analysis or topology, I'd appreciate their perspective.) I'm still not entirely convinced that the above argument rules out the application of the UAT to $\rho$. > To our best knowledge, the universal approximation theorem with the analytic activation function (e.g., sigmoid) [1] is established over a compact disk. There are several presentations of the UAT, and the presentation by [Hornik, Stinchcombe, and White](https://deeplearning.cs.cmu.edu/F21/document/readings/Hornik_Stinchcombe_White.pdf) applies to general compact sets of $\mathbb{R}^L$; see their Theorem 2.4. > Moreover, the function being continuous over $\mathcal{Z}$ does not mean it has a continuous extension over $[a, b]^L$. I don't believe that this is true. By the [Tietze extension theorem](https://en.wikipedia.org/wiki/Tietze_extension_theorem), a continuous function mapping a closed subset of a normal space to the real numbers has a continuous extension to the normal space. Because all compact sets are closed and $[a, b]^L$ is normal, a continuous extension exists. For the case you mentioned, $Q_{[0, 1]}$ is not compact because it is not closed. Hence, the non-existence of a continuous extension does not contradict $\mathcal{Z}$ having an extension. --- Reply to Comment 1.1.1: Title: Follow-up Response to Reviewer zB8R Comment: Dear Reviewer zB8R, We are grateful for your rigor which leads to our in-depth and meaningful discussion. **1. The full version of the corrected proof.** According to the rebuttal policy, we have sent the anonymous link to AC via an official comment only visible to him/her. We would appreciate it if AC could help share the link to all the reviewers. We apologize for the haste when prepareing this manuscripts, and any extra workload caused to you. Following subsequent rigorous efforts, we managed to produce a correct and enhanced version, which still supports our main claim while further relieving some 'strict' assumptions. **2. Contradiction to ZB's results.** We sincerely appreciate reviewer S2JM's reply, which reminds us all the results in [1] are established for the complex domain, where the existing UAT may not be applicable when limited to analytic activations. In a summary: * ZB's work [1] does not specify any results for the real domain. Since it remains intractable to extend their proof technique to real values, our results on real domain should not contradict ZB's results by any means. * Our result on complex domain still does not contradict ZB's conclusion. This is because UAT with analytic activation is not applicable to complex domain, thus, one cannot approximate the non-analytic $\rho$ with analytic neural networks. Reviewer S2JM has given an example 1/z which cannot approximated by analytic functions, while 1/z is not defined at $z=0$. $f(z) = \exp(-1/z^2)$ with $f(z)=0$ if $z=0$ is another example that is continuous over the entire complex disk but cannot be approximated by analytic functions. Note that complex analytic functions are not dense in the space of continuous functions. Overall, from these examples, we can see that ZB's results are limited to complex analytic functions and may not be applied to the real domain. Moreover, the requirement on using merely analytic activations in the complex domain is a very strong requirement, which on the other hand indicates the significance of our results that analyze the case without this requirement. We will include these important remarks into our revision. [1] Zweiga et al., Exponential Separations in Symmetric Neural Networks
Summary: This paper studies the required neural network width for representing permutation invariant functions on sets. Existing works either focus on the case where the set elements are scalars, or require exponentially large neural network width with respect to the dimensions of the set elements. This work proves that, under certain assumptions, both the upper bound and lower bound on the required neural network are polynomial in the set size and set element dimensions. Strengths: - The authors show that the bounds in this work are significantly better than existing results. - This result implies that moderately wide networks are expressive enough to represent set functions. Weaknesses: - **Lack of discussions on the practical implications of the results.** It seems that the main idea in the proof is to choose the mapping $\phi$ in a clever way. I'm curious about the practical implications of the constructions in the proof. Particularly, the authors mention _"neural networks to learn a set function have found a variety of applications in particle physics, computer vision and population statistics"_. I recommend the authors discuss how the results in this paper can provide insights and improvements on some typical network models in those areas (e.g., GNN, PointNet). - **Lack of experimental verification.** Although the bounds in this work look significantly better than existing results, experimental verification would strengthen the argument. - **Presentation issues.** I believe the authors should polish the presentation of Table 1 (comparisions with exsiting results), and the statement of the main theorem (Theorem 3.1) - Table 1: + $D+1$ and $D$ should be $N+1$ and $N$. + Use big-O notation to present the results of Segol et al. and Zweig & Brun. + I recommend to present the upper bound and lower bound separately as two columns. + The meaning of "Exact Rep." should be explained in the caption. - Theorem 3.1: + In lines 167 & 169, "For some" should be deleted as these two lines are part of the _where_ clause. + Eq. (2) $w_1x\to w_1^{\top}x$. Otherwise please specify that $w_1, ..., w_K$ are row vectors. + The main result contains both upper bound and lower bound. I recommend stating these two separately. Particularly, the lower bound should be stated as a negative result. Theorem 2.4 in [1] is a good example of how the result should be rigorously formulated. + In the LLE architecture setting, one more assumption $\mathcal{K}\subseteq \mathbb{R}_{>0}$ is required. The authors should highlight this as a premise of the theorem, instead of putting it in the _where_ clause. [1] Aaron Zweig and Joan Bruna. Exponential separations in symmetric neural networks. - **Minor errors**. - Line 28: DeepSets$\ \ $[9] $\to$ DeepSets [9] (There are two spaces between "DeepSets" and "[9]" in the submission). - Line 117: suggests $\to$ suggest. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Can you discuss how the results in this paper can provide insights and improvements on some typical network models in those areas (e.g., GNN, PointNet)? - Can you provide experimental verification of the result? - Lines 189-191 state that the assumption $\mathcal{K}\subseteq \mathbb{R}_{>0}$ is not essential. Then why not remove the assumption and present a stronger result? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The limitations have been discussed in lines 375-377. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer YiWd for acknowledging the significance of our results compared with prior arts. We will carefully proofread our paper and fix all the typos. Per your questions, please read our responses below: **1. Practical implication of the results (e.g., in GNN, PointNet).** The LP layer extends the construction of the original DeepSets by prepending a linear layer before its power series activation (Lemma 4 in [1]). In addition, we propose the LE layer (a improved version of LLE, see our general response), which leverages more commonly used components in deep learning, such as linear and exponential layers. Both GNNs and PointNet require set representation in their architecture design. GNNs learn to pass information along graph topology via a neighorhood aggregation operation at each layer of computation, which essentially corresponds to a set function [2]. PointNet processes point clouds by representing points as an unordered set and follows the DeepSets-like architecture. Our work gives a rigorous justification that moderately many neurons are sufficient to represent a high-dimension set with DeepSets. This affirms the feasibility of DeepSets architecture given high-dimensional features and explains why DeepSet-like operations can be effectively adopted in GNN and PointNet. [1] Zaheer et al., Deep Sets [2] Xu et al., How Powerful are Graph Neural Networks? **2. Lack of experimental verification.** To verify our theoretical claim, we conducted proof-of-concept experiments. Similar to [1], we train a DeepSets with $\phi$ and $\rho$ parameterized by fully connected neural networks to fit a function which takes the median over a vector-valued set according to the lexicographical order. The input features are sampled from a uniform distribution. The critical width $L$ is taken at the point where RMSE first reaches below 10% above minimum value for this set size. The relationship between $L$ and $N, D$ is plotted in the attached PDF. We observe $\log(L)$ grows linearly with $\log(N)$ and $\log(D)$, which validates our theoretical claim. [1] Wagstaff et al., On the Limitations of Representing Functions on Sets **3. Why not remove the assumption (Lines 189-191) and present a stronger result?** Thanks for this suggestion. After submission, we have been working on improving our results. The revised results are presented in our general response. We replace LLE layer with linear+exponential (LE) layer, which no longer requires the assumption on positivity feature space. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: I appreciate the authors' clarification on the practical implications and additional experiments. Regarding the theorem, it seems that the updated result still mixes the upper bound and low bound in one statement, and the whole theorem is a lengthy sentence with the key result appearing in the clause (correct me if I'm wrong). I'd recommend the authors to try to further improve the clarity. Besides, I will consider change my rating if the correctness of the new result can be checked. --- Reply to Comment 1.1.1: Title: Thanks for the reply Comment: Dear Reviewer YiWd, We appreciate you prompt reply and a reminder on the presentation form. We have sent an external link to our new results to AC. We will also take your advice to present our main results in our final version which provides better clarity. Thanks! Best, Paper 3792 Authors --- Reply to Comment 1.1.2: Comment: Dear Reviewer YiWd, We want to thank you again for the constructive comments on improving the presentation of this paper. To help you finalize your rating as the rebuttal period is ending soon, we provide the revised format of our main result following your instructions as below, which divides the upper and lower bounds into two statements. We are more than glad to hear more suggestions from you to further enhance the clarity. **[The main result]** Suppose $D\geq 2$ and given any continuous permutation-invariant function $f: \mathbb{R}^{N \times D} \rightarrow \mathbb{R}$. Consider a **continuous** mapping $\phi: \mathbb{R}^{D} \rightarrow \mathbb{R}^{L}$ such that either of the following holds: * For some $L \le N^5D^2$, $\phi$ admits *linear layer + power mapping (LP)* architecture: $$ \phi(x) = \begin{bmatrix} \psi_N(w_1^\top x)^{\top} & \cdots & \psi_N(w_{K}^\top x)^{\top} \end{bmatrix}^\top $$ for some $w_1, \cdots, w_{K} \in \mathbb{R}^{D}$, and $K = L / N$. * For some $L \le N^4D^2$, $\phi$ admits *linear layer + exponential activation (LE)* architecture: $$ \phi(x) = \begin{bmatrix} \exp(v_1^\top x) & \cdots & \exp(v_L^\top x) \end{bmatrix}^\top $$ for some $v_1, \cdots, v_L \in \mathbb{R}^{D}$. Let $\mathcal{Z} = \{ \sum_{i} \phi(x^{(i)}) : X \in \mathbb{R}^{N \times D}\} \subseteq \mathbb{R}^L$ be the range of $\phi$. Then there exists a **continuous** mapping $\rho: \mathcal{Z} \rightarrow \mathbb{R}$, such that for every $X \in \mathbb{R}^{N \times D}$, $f(X) = \rho\left(\sum_{i=1}^{N} \phi(x^{(i)}) \right)$. Moreover, if $\phi$ admits *linear layer + power mapping (LP)* architecture while $L < N(D+1)$, $f(X) \ne \rho\left(\sum_{i=1}^{N} \phi(x^{(i)}) \right)$ for any continuous $\rho: \mathcal{Z} \rightarrow \mathbb{R}$. To help check the correctness, we had already sent our amended results and proofs to AC few days back. However, as these materials have not yet been shared with the reviewers (as indicated by reviewer zB8R), we are reaching out to inquire if there are any additional pertinent details we can furnish to facilitate your assessment. Furthermore, we wish to highlight that the proof of our LP architecture presented in the current submission is inerrant. This underscores that our primary assertion remains valid, even if, in a worst-case scenario, the examination of the correctness of the LE architecture cannot be finished during the rebuttal phase. We would greatly value any additional endeavors that might contribute to a fair and hopefully more positive assessment to our work. Many thanks in advance. Best, Paper 3792 Authors
Rebuttal 1: Rebuttal: We sincerely appreciate all the reviewers for their time and efforts reviewing our paper. However, we have to apologize for the misplacements in the current submission due to a tight timeline when we prepared this manuscript, which led to an imprecise statements. After we submitted our paper, we noted the incorrectness of Lemma 4.9 (also revealed by Reviewer zB8R), so the statement on LLE in Theorem 3.1 is incorrect in its current form. Afterwards, we have been working on fixing the proof and successfully fixed the relevant error behind in July. Consequently, a corrected result is presented as below. **Our main claim polynomially many neurons are sufficient still holds for both architectures investigated.** We also improve this result by relaxing the assumptions. **[The main result]** Suppose $D\geq 2$. For any continuous permutation-invariant function $f: \mathbb{R}^{N \times D} \rightarrow \mathbb{R}$, there exist two continuous mappings $\phi: \mathbb{R}^{D} \rightarrow \mathbb{R}^{L}$ and $\rho: \mathcal{Z} \rightarrow \mathbb{R}$, where $\mathcal{Z} = \{ \sum_{i} \phi(x^{(i)}) : X \in \mathbb{R}^{N \times D}\}$, such that for every $X \in \mathbb{R}^{N \times D}$, $f(X) = \rho\left(\sum_{i=1}^{N} \phi(x^{(i)}) \right)$ where * For some $L\in [N(D+1),N^5D^2]$ when $\phi$ admits *linear layer + power mapping (LP)* architecture: $$ \phi(x) = \begin{bmatrix} \psi_N(w_1^\top x)^{\top} & \cdots & \psi_N(w_{K}^\top x)^{\top} \end{bmatrix}^\top $$ for some $w_1, \cdots, w_{K} \in \mathbb{R}^{D}$, and $K = L / N$. * For some $L\in [ND,N^4D^2]$ when $\phi$ admits *linear layer + exponential activation (LE)* architecture: $$ \phi(x) = \begin{bmatrix} \exp(v_1^\top x) & \cdots & \exp(v_L^\top x) \end{bmatrix}^\top $$ for some $v_1, \cdots, v_L \in \mathbb{R}^{D}$. We enumerate the differences with the previous results, and a few other remarks as follows: 1. The bounds for LP architectures are correct and kept unchanged. 2. For the LLE layer, we are able to simplify it to a linear + exponential (LE) layer. Essentially, the LE layer first transforms feature space via an exponential function, and then compounds anchor construction (Lemma 4.4) with LLE (see 4). **With this modification, the new result removes the positive input constraint.** 3. We adjust the upper bound for LE to $N^4D^2$. **Such a bound is still polynomial in feature dimension and set length, and tighter than the LP layer.** 4. The proof technique becomes unified for both LP and LE layer. Dismissing the reliance on Lemma 4.9, we derive the current result by combining Lemma 4.8 with the anchor alignment argument (Lemma 4.3). Specifically, our construction and proof outlines are listed as below: 1. We first cast an LE layer into linear + exponential + LLE layer: $\exp(V^\top x)=\exp(U^\top \log \exp(\Omega^\top x))$ where $V = \Omega U$. Note that here $\exp$ and $\log$ are entry-wise operations. 2. Let $\Omega = [W^{(1)}, W^{(2)}] \in \mathbb{R}^{D \times (D+K_1)}$, $K_1 = N(N-1)(D-1)/2+1$ follow the anchor construction as in Sec. 4.1. 3. Let $U = [\cdots, u_{i,j,p,q}, \cdots] \in \mathbb{R}^{(D+K_1) \times L}$, where $u_{i,j,p,q} = (q-1) e_i + (p - q + 1) e_{D+j}$, $i \in [D], j \in [K_1], p \in [N], q \in [p+1]$. Thus, we have $L=DK_1N(N+3)/2 \le N^4D^2$. With such construction, we can enumerates all bivariate monomials between feature channels and anchor with degree less or equal to $N$: $\exp(u_{i,j,p,q}^{\top} \log(x)) = x_i^{q-1} a_j^{p-q+1}$. 4. Injectivity can be shown by: 1) Lemma 4.8 indicates that $\sum_i \phi(x^{(i)}) = \sum_i \phi(x'^{(i)}) \Rightarrow [\exp(x_i), \exp(a_j)] \sim [\exp(x'_i), \exp(a'_j)], \forall i \in [D], j \in [K_1]$, 2) Lemma 4.3 induces $[\exp(x_i), \exp(a_j)] \sim [\exp(x'_i), \exp(a'_j)], \forall i \in [D], j \in [K_1] \Rightarrow \exp(X) \sim \exp(X') \Rightarrow X \sim X'$, where we note $\exp$ preserves the properties of anchor and permutation equivalence class. 5. Continuity is concluded by: 1) using the same argument in Sec. 4.3 to establish continuous inverse from $\sum_i \phi(x^{(i)})$ to $\exp(X)$, and then 2) composing a logarithm to inverse the exponential function. Due to the page limit, we regret that we could not include all the details in the rebuttal. However, to address this, we have diligently prepared a comprehensive revision with refined text and detailed proofs. An anonymous link is readily available for sharing these additional materials. We would sincerely appreciate it if reviewers are willing to spend time examining and re-evaluating our hard work with the corrected results. Pdf: /pdf/645ba9c53ca83915ed1e7aec3f88ab4ea3f587dc.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Estimating Noise Correlations Across Continuous Conditions With Wishart Processes
Accept (poster)
Summary: The goal of this paper is to compute the covariance of recorded neurons in a given stimulus condition with a low number of samples. Although the conditions are different, some aspects are shared which justifies the fact that the covariance for a specific condition should depend on the covariance in other conditions. This idea is implemented using a Bayesian model which imposes smoothness of the means and models the covariance as $L (U U^\top + \Lambda) L^\top$ where $L$ does not depend on the condition and $U$ and $\Lambda$ evolve smoothly with the condition. Strengths: Overall, the paper is very well-written and addresses a significant problem in neuroscience. The use of smoothness in the means and covariance estimates across conditions is new as far as I know. The authors validate their approach by experimenting with synthetic and real data. Weaknesses: While the smoothness part is new, exploiting similar conditions to yield a better covariance estimate is not. The example I have in mind is [1] where condition-specific covariances are biased towards the population covariance but since this idea is very natural, I assume more examples exist. [1] Rahim, Mehdi, Bertrand Thirion, and Gaël Varoquaux. "Population shrinkage of covariance (PoSCE) for better individual brain functional-connectivity estimation." Medical image analysis 54 (2019): 138-148. In particular, I find the experiments a bit biased toward the success of the method presented by the authors. The absence of any competitor that uses both empirical and population covariance estimates makes the comparison a bit unfair (obviously models that do not do this will not perform well by design of the generative mechanism). The other major drawback is the lack of clear metrics of performance on real data that demonstrate predictive power (e.g. a regression or classification task). The current metrics on the real data experiment rely mostly on visual inspection. As a side note, the visualization of principal components may hide some of the signal especially if the generated data is non-linear. Predictive tasks are better in that they give a clearer metric of performance. Is there a way to predict something related to the subjects (their age, their performance for instance) so that we have a proxy for the quality of the estimated covariance? A potential weakness (although I am not sure for this one), is the lack of internal cross-validation. Unless I am wrong, I believe the hyper-parameters are optimized on the same cross-validation loop as the one used for testing. If I am wrong, please clarify in the text by clearly stating that there are two nested cross-validation loops (an internal CV on the train set and an outer one to select the train and test splits). Lastly, compared to Ledoit-Wolf or others, I would assume that the model presented by the author is much slower to fit. If this is the case, this should be clearly stated (a figure with the running time in function of N would be great). There are small parts in the submission that I did not understand: - How do you optimize L? - Is $\bar{\mu}$ in equation (4) the average over samples in all conditions ? I believe this is never defined. - When you compute the log-likelihood (for instance to produce Figure 3 B) you need a model of the data. Is this model the one described by (3) ? Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: - Can you comment on the absence of a method mixing population and condition-specific covariance ? - Is there a way to design a predictive task based on your data instead of relying on visual inspection? - Can you comment on the computation time of your method ? - How is the cross-validation made? Is there an internal cross-validation to select the hyper-parameters? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 4 excellent Contribution: 3 good Limitations: N.A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **While the smoothness part is new, exploiting similar conditions [...].** Thank you for the pointer to the PoSCE paper. We had overlooked it since it is a neighboring field (none of us have experience with fMRI data). Roughly speaking, PoSCE can be adapted to our problem by estimating the covariance in condition $c$ as a weighted combination of the empirical/sample covariance in condition $c$ and the grand covariance across conditions. Our original work already compared to the two extremes – sample covariance and grand covariance were included as baselines – but it is a good idea to include a weighted combination of these two estimates as a stronger baseline. In actuality, PoSCE posits a probabilistic model over the tangent space of the PSD manifold and uses a geometric mean to perform the averaging. However, when we applied their code to our data it spits out NaNs. Given time constraints, and since conceptually simpler baselines may be preferable to a reader anyways, we decided to take a weighted linear combination of the per-condition sample covariance and the grand covariance. We feel this is equally justified as the Riemannian distance used by PoSCE, since sample covariances are already the arithmetic mean of the second moments: $\Sigma = (1/n)\sum_i \mathbf{x_i x_i^\top}$. We find that the Wishart process outperforms this modified PoSCE baseline on real and synthetic data (see “Weighted Avg” results in Rebuttal Fig F). Importantly, even if the PoSCE baseline was competitive in terms of performance, we still think our work would stand as an important contribution. In particular, PoSCE is not able to capture smoothness across conditions (as the reviewer mentions). It therefore cannot extrapolate or interpolate between conditions, which is one of our central motivations. See, e.g., the application to Fisher Information in the general rebuttal. > **The other major drawback is the lack of clear metrics [...].** Thank you for this suggestion. Inspired by this comment, we’ve performed a new analysis to demonstrate how improved covariance estimation may lead to improved decoding of the experimental condition. See Rebuttal Fig C and our general response to all reviewers for more details about this experiment. > **A potential weakness [...] is the lack of internal cross-validation [...].** Our original submission used an initial cross-validation run to show how sensitive the log-likelihood was to hyperparameters such as smoothness and number of components in Fig. 4B. We then fixed these hyperparameters and ran them over 30 additional train-test folds. Since each of these 30 folds was a fully randomized split, we think it is unlikely that we would have overfit our small number of hyperparameters. However, we agree with the reviewer that the most rigorous way to do this is to have randomized cross-validation train-validation-test folds, and to select new hyperparameters for each fold on the basis of the validation set. We have now done this on the monkey data and show our results in Rebuttal Fig F. The results are effectively unchanged, but we will swap this new analysis into the final revision because we agree that it is more rigorous. We will also clearly explain the cross-validation process in our methods – in particular, for each train-validation-test fold we do a randomized hyperparameter search over the kernel smoothness parameters and select the best model on the validation set for testing. We welcome any additional requests for details. > **Lastly, [...] the model presented by the author is much slower to fit [...].** Please check our response to R1 for the wall clock time comparisons. Although the computational complexity of the model is tractable (as mentioned in supplementary B.1.1) our model takes longer than compared methods to run. Notice that this is due to a multitude of reasons. First, instead of a single estimate of the covariance, we infer a whole distribution over means and covariances per condition which can be used to compute uncertainties associated with inferred means and covariances and also to sample from them. In addition, since our model is GP, WP based, we uncover an underlying continuous space of means and covariances which can be used for interpolating test conditions and computing properties such as curvature and gradients wrt conditions. Developing full-blown Bayesian statistical models like these usually require more computational resources as opposed to ad-hoc point-estimate or frequentist counterparts. Although our implementation is relatively fast, we recognize that there’s still a lot of room for improving the run time of the algorithm using the latest developments in the GP literature. We leave this for future work. > **How do you optimize L?** The matrix $L$ is initialized according to the Cholesky factorization of the grand empirical covariance and it’s optimized to maximize the evidence lower bound (ELBO). Note that $L$ is not a latent random variable: it is a parameter that we optimize as a point estimate. In contrast, we optimize a distribution over the latent variables of the model (e.g. covariance factors, denoted $U$ in the paper). These are jointly optimized with parameters like $L$. We regret that our description was too brief and caused confusion; we will improve the revision to make the paper more self-contained. > **Is $\bar{\mu}$ in equation (4) [...].** We apologize for this oversight. $\bar{\mu}$ is the mean function for the prior GP, in all of our experiments we set it to the constant function zero, we will include both of these missing details in the revised manuscript. > **When you compute the log-likelihood [...].** Yes. In all of our original analyses F(.) is a multivariate normal distribution as described on line 104. Since our original submission, we have also extended the model to be compatible with Poisson noise (see Rebuttal Fig E). All the questions in the **Questions** section are addressed above. --- Rebuttal Comment 1.1: Comment: Thanks for a comprehensive rebuttal. I believe this paper should be accepted. I have increased my score to 7.
Summary: This work proposes to use Wishart Processes, originally proposed by Wilson and Ghahramani, to estimate the covariance structure of neural activity in experiments where there are parametric variations linking stimulus conditions. By pooling estimates appropriately across task conditions, the limited number of experimental trials can be used to provide more reliable estimates of variability. Inference is performed via a mean field variational approach, and the model is applied to synthetic data and two neural data sets. This seems like a reasonable approach to data of the type the authors are modeling, and the experiments show that it outperforms existing methods in the literature. Conversely, this is a very straightforward application of an existing method, which may be novel to neuroscientists, but does not represent much of new conceptual contribution. Moreover, the results presented, while likely better estimates of covariance, do not produce any neuroscientific insights reported here. Strengths: - Application of Bayesian nonparametric techniques for estimating covariance to a new domain. - Pools strength across parametrically related stimulus conditions, making good use of a limited number of trials. - Clear presentation. Weaknesses: - This is a relatively straightforward application of an existing method. - Inferential details of the model are somewhat sparsely described. (See questions below.) - Results for the experiments are evaluated on various performance metrics, but it is unclear what new insights this method provides. It is perfectly reasonable to employ a better estimation method for this model, but the experiments in Figures 3 and 4 do not produce any additional findings beyond evidence that the new method works better. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - ll. 26-29: The number of parameters grows quadratically, but if all pairs of neurons are simultaneously observed, it doesn't follow that the process of estimating correlations does not scale well. Rather, the argument should be from the limited number of trials in each condition, right? - What is the difference between a Wishart process and a GP with a kernel that includes both trial and neuron indices? Some discussion of this might be helpful for those unfamiliar with the method. - Details of the variational approximation in the supplement are somewhat sparse. Do I understand correctly that the authors are performing full GP inference on $\mathbf{U}$ and $\boldsymbol{\mu}$? What exactly is the mean field form of $q_\phi$? Does it factorize over $(\mathbf{U}, \boldsymbol{\mu})$ or something else? Finally, the reference on line 7 of Supplement B is not included there, and I don't think it's the same as reference 3 in the main text. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: - This work is focused on neuroscience tasks in which trials are sampled from a set of parametrically related experimental conditions. This is true of many kinds of experiments but will not apply to many other cases that do not share that structure. - The authors discuss (in Supplement B) several methods for mitigating the poor scaling behavior of Gaussian Processes with $N$, $P$, and $C$ but do not implement any of the well-known scalable GP methods that would mitigate this, since their examples are for only moderate $N$. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > ***This is a relatively straightforward application of an existing method*** This is one of the most important points to discuss so we have laid out a detailed response in our “general rebuttal." To summarize our arguments: * “Application” papers of all stripes are requested in the “NeurIPS Call for Papers” * Our submission fits into a specific tradition of method papers in the “Neuroscience and Cognitive Science” track. * We extended the model to incorporate low-rank plus diagonal structure and we have now further extended the model to handle Poisson-distributed noise. * Even “vanilla” Wishart Processes are relatively exotic models that (to our knowledge) have not been widely applied outside of quantitative finance. > ***The number of parameters grows quadratically, but if all pairs of neurons are simultaneously observed, it doesn't follow that the process of estimating correlations does not scale well*** Our original statement&mdash;"number of estimated parameters grows quadratically... while the number of measurements grows only linearly"&mdash;is correct. E.g. Imagine a case where `N=2` neurons and `K=150` trials. Thus, we have `N * K = 300` measurements of firing rate. We want to estimate a `2 x 2` symmetric covariance, which has 3 parameters. Thus, the ratio of measurements to parameters is 100 to 1. Now imagine if we have `N=100` neurons, and we want to estimate `N * (N + 1) / 2` parameters in the covariance matrix. Intuitively, if we want the ratio of measurements to parameters to stay the same as before (i.e. 100 to 1), we would need to chose `K` to satisfy `N * K = 100 * N * (N + 1) / 2`. That is, we would need `K = 100 * 201 / 2 = 10050` trials. In summary, the difficulty of the problem depends *both* on `N` and `K`. We will edit our paper to convey this more clearly. Note that the intuition we provided above can be made fully rigorous. E.g. Vershynin (2011) “How Close is the Sample Covariance Matrix to the Actual Covariance Matrix?” > ***What is the difference between a Wishart process and a GP with a kernel that includes both trial and neuron indices?*** As a refresher, our method uses a GP to estimate the mean response and a Wishart process (WP) to estimate the covariance. Both the GP and WP have a kernel that measures similarity *across different experimental conditions*. The hyperparameters of this kernel are related to how smooth the neural response is as a function of changing the condition. In our model there is no kernel that measures similarity across neurons or trials. We consider these extensions in turn below: *GP model with kernel over neurons.* This would incorporate a smoothness prior over the neuron indices (e.g. neuron 1’s response would be highly correlated with neuron 2’s response, and de-correlated with neuron 53’s response). In most experiments, neurons are labeled arbitrarily so it typically does not make sense to add this structure. *GP model with kernel over trial.* This would incorporate a smoothness prior over trial index, which could be helpful to model slow drift or non-stationarity in the neural response. Since multiple conditions are often randomly interleaved across trials, it may be better to use the absolute time between two trials (rather than their integer index) to quantify this non-stationarity. Note that adding a time component to the GP kernel *would only model non-stationarity in the mean* neural response.* It would *not* capture correlation structure across neurons (which is our primary motivation). Thus, an interesting extension of our model would be to add a time component to *both the GP and WP kernels*. We welcome further discussion from the reviewer, in case we have misinterpreted their suggestion. > ***it is unclear what new insights this method provides*** We agree that we can do more to spell out the scientific insights our model helps enable. We would like to point the reviewer to some existing results in the paper, as well as some new experiments that we’ve performed in the rebuttal phase. *Results in the paper:* (a) smooth covariance interpolation in Fig. 3D (b) Generating covariances using a single trial per condition in Supplemental Fig. 6 (c) drawing a full posterior distribution over means and covariances attaching uncertainties to covariance estimates. *New results:* (a) assessing Fisher Information in Rebuttal Fig 1D (b) modeling covariances for Poisson distributed observations in Rebuttal Fig 1E (c) linear vs. quadratic discriminant analysis in Rebuttal Fig 1C. Finally, the introduction section of our paper cites multiple neuroscience papers where covariance plays an important quantitative role. Our model is broadly applicable, and so opens up the possibility of many new scientific insights across these cited works. We hope that the reviewer can understand it is challenging to validate a new form of analysis while simultaneously providing a scientific breakthrough. The primary focus of our paper is methodological, but we have done our best to outline scientific future directions (see our "general rebuttal"). > ***Details of the variational approximation in the supplement are somewhat sparse.*** We regret that we were not clear and understand the importance of fixing these details. By “full GP inference” we interpret the reviewer to mean the closed form solution to GP regression with Gaussian observation noise. We do not use this anywhere since the WP posterior does not admit a closed form solution. Thus, to approximate the posterior we perform joint variational inference over the GP and WP parameters (they do not factorize as pointed out by Reviewer mr49). Each parameter is modeled with a Gaussian with a learnable mean and variance, and no correlations are modeled across parameters. That is, we perform standard mean field variational inference (e.g. Blei et al. 2017, “Variational Inference: A Review for Statisticians”). We are happy to answer more queries in the discussion period. --- Rebuttal Comment 1.1: Comment: I appreciate the authors responses to my review. Some specific replies: > “Application” papers of all stripes are requested in the “NeurIPS Call for Papers” To be clear: I do not want to dismiss the authors' approach nor the work that went into it. I am not opposed to application papers. But application papers published in the Neuroscience and Cognitive science track do tend to exhibit methodological novelty even when they port over existing approaches, and/or they show how an existing approach can produce novel findings. As the authors state, the generative model and inference approach have been previously published. The authors have used a diagonal plus low rank ansatz for the Wishart, which is a reasonable, incremental advance. They also report that they have implemented a Poisson observation model, a technical advance that was not reported in the original submission. > Even outside of neuroscience, Wishart processes are a relatively exotic model. Prior applications of the method appear mostly confined to quantitative finance. I am not sure what the argument is here. If the method has not been used and provides benefits for analysis, that's great. If it doesn't provide benefits, it doesn't matter that it's exotic, right? > We hope that the reviewer can understand it is challenging to validate a new form of analysis while simultaneously providing a scientific breakthrough. The primary focus of our paper is methodological, but we have done our best to outline scientific future directions (see our "general rebuttal"). Absolutely. But as Reviewer mr49 noted, the results are somewhat underanalyzed, and also, per Reviewer ruJu, they rely mostly on visual inspection for their impact. > Thus, to approximate the posterior we perform joint variational inference over the GP and WP parameters (they do not factorize as pointed out by Reviewer mr49). Each parameter is modeled with a Gaussian with a learnable mean and variance, and no correlations are modeled across parameters. Thank you for the clarification. > Our original statement—"number of estimated parameters grows quadratically... while the number of measurements grows only linearly"—is correct. I apologize if I'm being dense, but if I have $K$ observations of an $N$-vector, I have $K$ pairs of numbers with which to estimate each unique element of the covariance matrix, correct? So the uncertainty of each entry still decreases as $1/K$? The number of parameters grows quadratically (with $N$), but so does the number of paired observations (with $K$). That is, the full covariance matrix is a sum of $N$ symmetric rank-1 matrices: $\boldsymbol{\Sigma} = \sum_{i=1}^n \mathbf{u}\mathbf{u}^\top$, so while it has $\mathcal{O}(N^2)$ parameters, it has no more information than $N$ $N$-vector observations, so one only needs $K \sim \mathcal{O}(N)$. Put another way, while the number of parameters increases with dimension, so does the size of each observation. In fact, the Vershynin paper the authors cite accords with this: For a covariance matrix in $N$ dimensions (in the authors' notation), the necessary number of observations is $K(N) \sim \mathcal{O}(N)$ for sub exponential distributions and $\mathcal{O}(N\log N)$ at worst. That is, the intuition that one needs a specific number of observations _per covariance matrix entry_ for a given level of accuracy is incorrect. The Vershynin paper shows that what is needed is a specific number of observations _per dimension_. > Scientific contribution. We agree with Reviewer mr49’s suggestion that “it would help the significance somewhat if this estimation method could help discover (or refine) a scientific conclusion.” Along similar lines, Reviewer ruJu inquired whether it is possible to “demonstrate predictive power (e.g. a regression or classification task).” I appreciate the authors' clarifications on these points. --- Reply to Comment 1.1.1: Comment: We appreciate your responses. We hope that the technical contributions (low-rank structure and now also Poisson noise model) together with the scientific analyses we highlighted in the general rebuttal have raised your interest. > ***I am not opposed to application papers.*** Thank you for this clarification. > ***But application papers published in the Neuroscience and Cognitive science track do tend to exhibit methodological novelty even when they port over existing approaches, and/or they show how an existing approach can produce novel findings.*** We think the reviewer is being fair. In our view, the level of "methodological novelty" in this track covers a pretty wide spectrum: some papers do propose fairly new models, while others have very little methodological advance. (We would rather not cite specific examples here out of courtesy.) Our paper does include technical advances (see below). But we think the real litmus test should be: **Will this paper have a measurable impact on the way people approach neural data analysis?** That is, if this paper didn't exist, would people analyze their data sub-optimally or overlook an opportunity to investigate a certain question? Currently, noise correlation analysis is only performed on very simple stimulus sets (e.g. across two oriented gratings in Rumyantsev et al. 2020). This is because people think covariance estimation isn't tractable with few trials per condition, and common methods in neuroscience (e.g. Yatsenko et al. 2015 used Ledoit-Wolf and Graphical LASSO) don't pool power across nearby conditions. We think Wishart processes are a great way to overcome this problem, and unlock the possibility of many new experiments/analyses. Our general rebuttal outlines two additional use cases (quadratic decoders and estimating Fisher information) as concrete next steps. > ***The authors have used a diagonal plus low rank ansatz for the Wishart, which is a reasonable, incremental advance. They also report that they have implemented a Poisson observation model, a technical advance that was not reported in the original submission.*** Thank you for recognizing these advances. We agree that they are incremental&mdash;perhaps so much so that we failed to sufficiently highlight the low-rank ansatz in our initial contribution. Nonetheless, these tweaks were important to get the model to work in practice, so we think they are important to be put into the scientific record. > ***I am not sure what the argument is here. If the method has not been used and provides benefits for analysis, that's great. If it doesn't provide benefits, it doesn't matter that it's exotic, right?*** Our point was that neuroscientists are unlikely to notice this model, absent our work. Put differently, it would be fair to criticize our paper for being a "straightforward application" if the method itself was well-known and had a standard implementation in e.g. scikit-learn. We agree it is more important to show the model actually provides a benefit to the field. (We think it does.) > ***the results are somewhat underanalyzed... they rely mostly on visual inspection*** Our main results are not visual, but quantitative metrics: cross-validated log-likelihoods, recovery of the ground truth covariance in the operator norm on synthetic data, and now decoding performance (Rebuttal Fig C) and Fisher information (Rebuttal Fig D). We are not sure what is meant by "underanalyzed" but we note that the quoted reviews ultimately concluded that the merits of our paper outweigh its limitations. We hope our individualized responses to those reviews addressed those concerns. > ***The Vershynin paper the authors cite accords with this: For a covariance matrix in N dimensions (in the authors' notation), the necessary number of observations is K ~ N*** We still stand by our original statement. But the upshot is that we will revise the text to be more precise (i.e. we will just cite the result from Vershynin), and we appreciate you helping us fine-tune our message. Our original statement was essentially: if $K$ stays constant and $N$ grows, your estimation of covariance can degrade. Intuitively, this is because the number of measurements equals $KN$ (i.e. $K$ observed $N$-vectors), while the number of parameters you need to estimate is $\mathcal{O}(N^2)$. That is, the number of measurements grows linearly with $N$ (for fixed $K$) while the number of parameters grows quadratically. Assuming sub-exponential variables, Vershynin says that if you take $K = c N$ (for some constant $c$) you will get good performance. In this case the number of measurements you make is $K N = c N^2$ which is the same order of magnitude as the number of parameters you need to estimate, $\mathcal{O}(N^2)$. Thus, one *does* need roughly the same number of measurements per covariance matrix entry. Perhaps our original statement was confusing and the reviewer thought we intended to imply that you'd need $K = N^2$ trials?
Summary: The estimation of noise covariance in neuroscience is limited by the experimental difficulty of obtaining large numbers of trials for the same neurons, but also the desirability of large numbers of neurons. Here the authors begin by recognizing this need for lower-variance estimators of the noise covariance, and especially those that leverage correct assumptions specific to this domain. This submission centers around leveraging the fact that noise covariance is often estimated not only by repeating a single stimulus many times, but also by showing very similar stimuli parameterized by a continuous number (such as grating orientation), which should evoke similar responses. To leverage this assumption, the authors build a probabilistic model of neural responses in which assumes that the response statistics vary smoothly with a known parameter x. Specifically, transformed neural responses are modeled as a Gaussian distribution whose mean and factorized covariance ($U$) are Gaussian Processes of x (with a squared exponential kernel). The parameters of this probabilistic model, including the covariance as a function of x, are inferred via mean-field variational inference. The estimation method is validated on simulated data and neural recordings in mouse and macaque. Strengths: This well-written paper tackles a practical problem already faced in many experimental designs in neuroscience, and will likely find wide use if the code is well-documented. The solution is elegant, offers a nice balance of simplicity and power, and benefits from its specific tailoring of known statistical methods to a particular statistical problem in the domain of neuroscience. Weaknesses: Although I see the benefits of model simplicity, one wonders what this model might miss about the neural response. It is worth noting that the estimated covariance is, strictly speaking, the estimated covariance of the model's residual error of its mean estimate. If this mean is poorly estimated, the noise covariance will be larger than actuality, if not wholly inaccurate. The accuracy of the mean response, in turn, depends on the assumptions built into the model. How much is it accurate to say that neural responses, in general, have a mean firing rate that is a GP of the stimulus parameter with a squared-exponential kernel? The utility of analyzing the resulting $\Sigma(x)$ depends on how much one trusts that this implied encoding model is a good one. With this said, it is at least more transparent here that analyzing summary statistics like noise correlations always implies, under the hood, a statistical model of neural responses. It is nice here that this is explicit, and I think a step in the right direction. Finally, it would help the significance somewhat if this estimation method could help discover (or refine) a scientific conclusion in, say, the Allen dataset. What more can we learn from existing data with a lower-variance covariance estimate? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I have a request for some analyses that would show how well this model captures neural responses. - First, for the presented neural data it would be nice to see well this model compares in log likelihood to a probabilistic baseline that is more expressive than a GP mean. The ELBO of a VAE comes to mind, but there are many options in a good-faith effort to build powerful (if uninterpretable) probabilistic models. - It would also be nice to see an analysis of synthetic data where some of the assumptions are broken. For example, what happens on data in which the smoothness parameter is not uniform but is itself a function of the inputs? This seems like a near-certain outcome in real neural responses. (For example, in efficient codes of natural images, certain small changes in inputs result in larger changes in neural activity. As per the oblique effect, one might expect that neural responses vary more quickly w/r/t orientation for the more-frequent cardinal orientations as compared to less-frequent diagonal orientations). A discussion of these figures could help ensure this estimator is used where appropriate. For clarity, could the graphical model of this generative model and variational inference procedure be presented as a figure? Not much was said about the intricacies of learning and inference. Figures in this direction would likely help adopters troubleshoot the problems usually found in ELBO-trained methods. E.g. what is the tradeoff of accuracy vs speed in the number of MC samples used to estimate of the gradient of the loss? - Related: In Fig 3B, is this the max, min, and mean (across seeds) of the median LL across 30 folds? If so, that would narrow these distributions considerably relative to just the distribution of LL across folds on a single run. This would somewhat obscure the variance of the estimator. A thorough analysis of the variance of this estimator would be interesting, help instill confidence in its use, and align with the paper's main narrative (as it is likely smaller than naïve covariance estimation). Might the kernel be learned via a more expressive function? It strikes me that contrastive learning methods learn such kernels (and are often talked about as learned distance metrics) and could be readily slotted into this framework without (in my opinion) much loss in interpretability. A transfer-learning/ foundation model approach would mitigate the large sample size requirement of a large model like this. There was a sentence about this in the Discussion, so count this as a vote of excitement in that future direction. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Some limitations were discussed. Restating some things above, it’d be nice to see more acknowledgements of the limitations relating to the implied GP encoding model and the potential shortcomings of the nonconvex, ELBO-based inference method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Although I see the benefits of model simplicity [...].** We agree that accurately inferring the mean is an important aspect of accurately inferring the covariance. Using a GP to enforce some smoothness in the mean response across conditions is indeed reasonable. See, for example: Wu et al. ("Gaussian process based nonlinear latent structure discovery in multivariate spike train data.", NeurIPS 2017). The details of the model will likely depend on the particular neural dataset under study. It is certainly possible that other kernels are better suited to certain situations. Our code package will of course have the standard kernels implemented so that users can try different forms of the model. While we think this is a reasonable first pass, it is easy to swap in new kernel functions, methods for kernel learning, or even methods for non-stationary Gaussian process inference as discussed below. We are excited to try these new directions in future work. > **[...] it is at least more transparent here [...].** Thank you. We agree with these sentiments. > **[...] discover (or refine) a scientific conclusion [...].** Thank you for this comment, which motivated us to flesh out how WPs can provide estimates of Fisher Information that are (a) more accurate, and (b) come with Bayesian uncertainty estimates. Fisher Information is a fundamental quantity in theoretical neuroscience that is used to characterize thresholds of perception (see Moreno-Bote et al., 2014, Nature Neuroscience). Please see the general rebuttal and Rebuttal Fig D for more information. > **[...] probabilistic baselines.** Thank you for the interesting suggestion. In our initial discussions, we were not able to find a straightforward way of performing this experiment. Popular VAEs consider a factorization in the latent space which is independent across different dimensions, estimating a diagonal noise covariance in the latent space as opposed to the high-dimensional firing rate space. Deep VAEs are data-hungry (require many trials) while the applications we are interested in are very trial-limited. We also feel that doing a comprehensive comparison to VAEs is outside the scope of this paper since VAEs represent a very broad class of models with different objective functions (e.g. beta-VAE), architectures, and hyperparameters. Please let us know if you have a concrete model in mind that could be implemented in the time frame of the review process. We are happy to try. > **[...] some of the assumptions are broken [...].** We thank the reviewer for the insightful suggestion. Indeed model misspecification is likely to occur for any model of neural data. We would like to re-emphasize that a GP or WP prior does not necessarily mean that the neighboring conditions are equi-distanced in the mean or covariance space. The prior encourages the nearby conditions to have close means and covariances, once the model is conditioned on the observed data, it allows for the distances to be influenced by the empirical values. In technical terms, GPs with stationary kernels are asymptotically consistent (see the introduction of Koepernik et al. "Consistency of Gaussian process regression in metric spaces." for a review). Regarding the suggestion, again our initial discussion was not conclusive about how to perform this experiment. GPs with stationary kernels do not allow for a varying smoothness parameter. One way to incorporate the reviewer’s suggestion would be to use non-stationary kernels, but choosing a non-stationary kernel that’s consistent with neuroscientific observations is not straightforward. We believe our work opens up the possibility of adapting non-stationary Gaussian process methods to this problem, and we now discuss this as a future direction in our revision. See Paun et al. "Stochastic variational inference for scalable non-stationary Gaussian process regression." for more information. > **For clarity, could the graphical model [...]?** Please see Rebuttal Fig 1G,H. We will update the manuscript with this information. > **[...] intricacies of learning and inference [...].** In our code package, all the parameters for every run of the algorithm are stored in configuration (yaml) files allowing for inspecting and modifying the parameters. An example config file is included in the uploaded code package attached to the supplementary. Fortunately, we have not observed sensitivities to the optimization parameters. We’ve used Adam optimizer with a learning rate of 0.001 with a single MC sample and 10000 iterations for all experiments. In our experiments, using multiple MC samples did not lead to qualitatively different outcomes. As expected, the run time is moderately slower with multiple MC samples; overall run time is still on the order of a few minutes, so this is not a major concern. The number of MC samples is a tunable parameter in our code package. > **Related: In Fig 3B, is this the max, min [...].** We are not 100% certain what the reviewer is requesting. We reported the median log-likelihood of individual observations in the test set, which is similar to the mean. We plan to substitute the mean log-likelihood in our revision since it produced essentially identical results (see Rebuttal Fig E,F). The variance of the LL across individual samples is not related to the variance of the estimator. The variance of an estimator is a frequentist notion of uncertainty that does not neatly map onto the Bayesian approach that we’ve adopted here. We don’t think that the variance of LL across individual samples is usually of interest to report, but perhaps we are missing something or misinterpreting the reviewer’s request. We are happy to continue the discussion. > **[...] more expressive function [...].** We thank the reviewer for sharing our excitement. Indeed this is a direction that we are considering and excited about for future work. --- Rebuttal Comment 1.1: Title: Thanks for the changes Comment: Hi, I believe my concerns are fully addressed. On reconsideration, I don't believe model misspecification is as scary of a threat as I first considered it. In this application domain (of smoothly parameterized stimuli) a GP assumption is a good start. This also also relates to my request for expressive baselines, which I agree won't be so feasible in these stimulus-limited domains. Such concerns are likely more relevant for (future) applications of these ideas to naturalistic stimuli and learned kernels. In the revision I especially appreciate the new application to Fisher Information. This will certainly be of interest to those at the intersection of psychophysics and neural physiology. In light of these changes I will raise my score by 1 to 7.
Summary: - Exhibits good performance, especially for recordings of large neural populations with very few trials per condition. - Enables dense sampling of conditions with only a single trial per condition, unlike standard estimators that require a large number of trials per condition (although based on empirical assumptions). Strengths: - Exhibits good performance, especially for recordings of large neural populations with very few trials per condition. - Enables dense sampling of conditions with only a single trial per condition, unlike standard estimators that require a large number of trials per condition (albeit grand empirical estimator). Weaknesses: - Computationally more expensive than standard covariance estimators Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Figure 2: What does the "Wishart oracle" mentioned in the legend represent? Although it is mentioned, it does not appear to be shown in any panel. Additionally, why is "Empirical" only plotted in panels D3 and D4? It is assumed that its log probability in panel C diverges, but what about D1 and D2? Furthermore, why does the mean operator norm not align with the log probability for C3 and D3? In D4, one would expect the gap between "Grand Empirical," assuming infinite smoothness, and Wishart to consistently widen with lower smoothness. - Figure 4: How is it possible for condition 1 to be present in both the training conditions (panel C) and the test conditions (panel D)? - What is the computational runtime compared to standard covariance estimators, particularly in comparison to the "Grand Empirical" estimator that performs well on the real experimental data shown in Figures 3 and 4? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Computationally more expensive than standard covariance estimators** We have clarified that the computational burden is reasonable: it takes 80 seconds to fit a dataset with 100 neurons, 80 conditions, 32 trials per condition. Notice that we run 10,000 iterations of our optimization algorithm, therefore the run time for such a dataset is about 8 milliseconds per iteration. The runtime for other algorithms for a similarly sized dataset are the following (in seconds): PoSCE: 45, Graphical Lasso: 24, Ledoit-Wolf: 0.3, and Empirical: 0.3. Experimental data in neuroscience is extremely expensive and time-consuming to collect. Equipment and facilities alone cost tens of thousands of dollars, and animals may need to be trained for weeks (if not months) to perform certain tasks. It is also ethically imperative to use as few animal subjects as possible (particularly for non-human primates). In short, the experimental datasets we target are extremely valuable: neuroscientists are highly motivated to perform high-quality analyses, even if they require marginally more computational resources. We also wish to remind the reviewer that our model has fundamentally new capabilities that are not present in simple baselines. In particular, we can infer a continuous manifold for the mean and covariance of neural responses (see Fisher Information analysis in Rebuttal Fig 1D), generalize to entirely unseen conditions (see Fig. 3D), and quantify our uncertainty in a Bayesian framework. A fair comparison between inference times should take this difference into account. > **Figure 2: What does the "Wishart oracle" mentioned in the legend represent? Although it is mentioned, it does not appear to be shown in any panel.** Thank you for noticing. This corresponds to an experiment that we removed before submission but forgot to remove from the legend. Briefly, this corresponded to a Wishart process but with parameters initialized to the ground truth. Our intention was for this to be a sanity check – an “upper bound” on our model’s performance. However, we intend to remove it in the final version for a streamlined presentation. > **Additionally, why is "Empirical" only plotted in panels D3 and D4? It is assumed that its log probability in panel C diverges, but what about D1 and D2?** This has been an oversight, we apologize for this. We have included the revised figures in Rebuttal Fig 1A,B. However, empirical covariance matrices were ill-conditioned and did not produce proper log-likelihood estimates to be included in Fig. 2 C1-4. > **Furthermore, why does the mean operator norm not align with the log probability for C3 and D3?** They are measuring performance in two different ways, so it is possible for a model to outperform another in log probability but vice versa on the covariance operator norm. Generally, these two measures of performance are qualitatively aligned (except for the portions of C3 and D3) that the reviewer notes. We will add a short explanation to section 3.1 in the revision. > **In D4, one would expect the gap between "Grand Empirical," assuming infinite smoothness, and Wishart to consistently widen with lower smoothness.** We agree. This is generally what we observe in D4. The reviewer may be noting there is an exception at very low degrees of smoothness. Here, all models perform badly since there is no good way to share information across conditions and there are too few trials per condition. It is curious that Grand Empirical “beats” the WP here, which may be due to the variational inference getting caught in sub-optimal local minima. Ultimately, we think this edge case is not a regime of primary interest, since no model performs well (i.e. covariance estimation is not tractable). > **Figure 4: How is it possible for condition 1 to be present in both the training conditions (panel C) and the test conditions (panel D)?** There are two separate analyses. In panel C, we train on all conditions and test on a subset of trials per condition. In panel D, we train on a subset of conditions and test on held-out conditions. We show condition 1 in each panel since it was held out in the second analysis. We will update the legend titles and figure legend to make this more clear. > **What is the computational runtime compared to standard covariance estimators, particularly in comparison to the "Grand Empirical" estimator that performs well on the real experimental data shown in Figures 3 and 4?** Please see our response above. In short, we believe the computational burden is negligible. Our revision will include a table showing the computational scaling of our algorithm as a function of the number of neurons, conditions, and trials per condition. Roughly speaking, we expect computation times to be on the order of minutes for large-scale neural data. --- Rebuttal Comment 1.1: Comment: Thank you for providing a thorough rebuttal. I retain my already positive score.
Rebuttal 1: Rebuttal: # General Response We thank the reviewers for their thoughtful and productive critiques which did not identify any major technical errors. We have done our best to incorporate all reviewer feedback and requests for more details (see individual responses to each reviewer). Below we summarize the two most important considerations that we hope the reviewers and AC take into account during their final deliberation. **Technical contribution.** The reviewers largely agreed that our paper addresses an important open problem, is well-written, and is germane to the NeurIPS audience. One potential exception is Reviewer yodS, who writes “this is a very straightforward application of an existing method, which may be novel to neuroscientists, but does not represent much of new conceptual contribution.” We would like to highlight the following: - The NeurIPS call for papers invites papers on “Applications.” It is especially common for statistical neuroscientists to publish methodological papers in the “Neuroscience and Cognitive Science” conference track. By adapting an existing method to neural data, we believe our paper fits squarely within this tradition. - Even outside of neuroscience, Wishart processes are a relatively exotic model. Prior applications of the method appear mostly confined to quantitative finance. - We were not able to identify an implementation of this model that could be used off-the-shelf on our data. Indeed, the Wilson and Ghahramani paper cited by the reviewer used a different inference method (MCMC). We found it necessary to use a more recent variational inference procedure that appeared at NeurIPS in 2019 by Heaukulani & van der Wilk. We found it necessary to implement their model from scratch. A deliverable outcome of our paper will be a self-contained code package that is customized to handle neural data (see supplement for a lightweight version of this package). - Most importantly, we found that an extension of the model with low-rank-plus-diagonal structure outperformed the full-rank Wishart process. We have not seen such an extension in prior work, even though it has multiple benefits: the learnable diagonal term ensures that matrices are well-conditioned during inference, has fewer parameters (discourages overfitting), and is a reasonable prior for a “spiked covariance” model. If the reviewers agree this is novel, we will revise the text of our paper to emphasize this technical innovation. - Since our original submission, we have also obtained good results using a Wishart process with Poisson-distributed observations (see Rebuttal Fig E). This is an important consideration for neural data, as evidenced by many prior submissions to the neuroscience track at NeurIPS (e.g. Zhou & Wei 2020 “pi-VAE”). Again, we are unaware of any paper that describes this variation of a Wishart process. In summary, we respectfully disagree with the notion that our work is merely a “straightforward application.” It involved implementing the model from scratch, adapting the model (e.g. with low-rank-plus-diagonal structure), and benchmarking the model against multiple baselines and across two datasets spanning different species and experimental paradigms. Publishing this effort is necessary to spark an interest in this class of under-appreciated models. Indeed, every colleague in computational neuroscience that we’ve asked is unaware of Wishart process models. **Scientific contribution.** We agree with Reviewer mr49’s suggestion that “it would help the significance somewhat if this estimation method could help discover (or refine) a scientific conclusion.” Along similar lines, Reviewer ruJu inquired whether it is possible to “demonstrate predictive power (e.g. a regression or classification task).” We have thus far pursued two scientific applications with promising results. First, by inferring condition-specific covariance matrices, our method enables decoding conditions by quadratic discriminant analysis (QDA). Linear discriminant analysis (LDA) is a more common model in neuroscience, but quadratic decoders are biologically plausible and of interest to the field (Pagan et al., 2018, Neural Computation). In Rebuttal Fig C, we show that the QDA models enabled by our method outperform linear decoders. This directly answers ruJu’s query. Second, an even more exciting direction is to use Wishart processes to quantify information-limiting noise correlations (Moreno-Bote et al. 2014, Nature Neurosci). Our method enables continuous interpolation of the mean and covariance across conditions which is necessary to compute the Fisher information. Furthermore, when a Gaussian process is differentiable, its derivative is also a Gaussian process. Thus, we can compute a posterior to quantify uncertainty in our estimate of Fisher information. We showcase our method in Rebuttal Fig D. Together, the two results above provide a more concrete glimpse into how our methods can be used for scientific discovery. We hope the reviewers are sympathetic to the challenges of providing a complete scientific story in the same paper as an important methodological advance. **Gameplan.** We will update Figure 2 with the missing lines pointed out by reviewer sHno (see Rebuttal Fig A-B). We will add Rebuttal Figs C and D to Figure 2 of the paper to demonstrate scientific applications and significance. We will describe the Poisson model in a supplemental section (see Rebuttal Fig E). We will swap Rebuttal Fig F in for Figure 3B to address a comment by reviewer ruJu. We will add the graphical model schematic (Rebuttal Fig G) to Figure 1. We will edit the text of the paper as described in the individual reviewer responses. We hope that the reviewers and AC will agree that these edits will substantially improve the manuscript without fundamentally changing the main story and results. Pdf: /pdf/1faf4e45c3164918178a51cc1f1d77c95838bc80.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
The Quantization Model of Neural Scaling
Accept (poster)
Summary: This paper proposes a possible mechanism that explains both the phenomenon where the cross-entropy loss of large language models (LLMs) decreases as a power law with respect to the training corpus size, and the phenomenon in which certain capabilities of LLMs emerge spontaneously as the loss becomes low enough. The authors empirically demonstrate that in a toy problem called "multitask sparse parity", in which their assumptions explicitly hold, an MLP network indeed obeys both the scaling laws and the emergence phenomena. Finally, the authors propose an empirical method to auto-discover skills in LLMs and apply this method to provide empirical evidence that their assumptions also hold in LLMs. Strengths: 1. The paper proposes a novel and timely explanation for both the scaling laws phenomenon and the emergence phenomenon in large language models (LLMs). 2. The authors support their explanation with a clear demonstration on a toy task of multitask sparse parity. Essentially, they trained a simple MLP network on multiple sparse parity tasks and demonstrate both the scaling laws phenomenon and the emergence phenomenon when the distribution of the different tasks follows a power law. 3. In addition, the authors demonstrate the relevance of their explanation for LLMs by proposing a novel method for auto-discovering skills in LLMs and show that the discovered skills obey power law frequencies. Using and scaling this method might be of independent interest for both the mechanistic interpretability community and for designing better pre-training datasets. 4. The paper is clearly written and accessible to readers with varying backgrounds. Weaknesses: There is insufficient evidence to conclusively prove that the Quantization Model accurately depicts the training of large language models. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Could you provide a practical demonstration of the Quantization Hypothesis in a controlled environment that is neither overly simplistic nor complex, while ensuring a clearer definition of natural language quantas? For instance, perhaps you could utilize a pre-trained language model like GPT-4 to create datasets with well defined quantas in a similar manner to the TinyStories dataset [1]. [1] Ronen Eldan, Yuanzhi Li (2023). TinyStories: How Small Can Language Models Be and Still Speak Coherent English? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors were very honest in acknowledging less plausible alternative explanations for their empirical findings. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback and question/suggestion! We agree that more work could be done in evaluating to what extent scaling on real-world datasets satisfies the Quantization Hypothesis. **Could you provide a practical demonstration of the Quantization Hypothesis in a controlled environment that is neither overly simplistic nor complex, while ensuring a clearer definition of natural language quantas? For instance, perhaps you could utilize a pre-trained language model like GPT-4 to create datasets with well defined quantas in a similar manner to the TinyStories dataset [1].** This is a cool idea! Based on this suggestion we tried running QDG on the TinyStories-33M model. We get some interesting clusters, although the most interesting TinyStories quanta seem less interesting than the most interesting quanta on The Pile. Some clusters in TinyStories include: - Predicting a question mark at the end of some dialogue that asks a question - Predicting the correct pronoun based on the name of the subject - Predicting quotation marks after ‘said, ‘ - Predicting a comma after “Suddenly” or “Just then” - Predicting a newline after a dedicated space token Many of the clusters we get seem to reflect very simple repeated patterns in the TinyStories dataset, rather than reflecting sophisticated reasoning. We’ll add examples of some of these clusters to the appendix. --- Rebuttal Comment 1.1: Title: Thank you Comment: Thank you for the thorough rebuttal and additional experiment you performed. After reading the other reviews, I have decided to keep my score unchanged.
Summary: This paper proposes a hypothesis that there exists a universal and crucial discrete set of computations (quanta) for reducing loss in various prediction problems, and the model's performance is determined by successfully learning these computations. Through this hypothesis, it demonstrates the relationship between power law neural scaling and the effective learning of more quanta, which cannot be solved solely by the memorization ability of the existing model, particularly in complex problems. Additionally, the paper proposes "Quanta Discovery from Gradients" as a method to auto-discover quanta. Strengths: 1. Based on the assumption that there exists a discrete set of computations, known as "quanta," which determines the performance of the model, the paper provides hints for understanding the emergent behavior observed in the scaling law of the model. 2. The objective of the study is to interpret power law scaling and emergent behavior phenomena by using a multitask sparse parity dataset that cannot be solved solely by the model's memorization ability. 3. Through experiments conducted on the multitask sparse parity dataset, the paper shows that when the frequency of using quanta follows a power law distribution, power law neural scaling can occur as the model learns more quanta. Weaknesses: 1. The paper lacks a clear definition of quanta, which could benefit from further elaboration and clarification. 2. Insufficient explanation is provided regarding the criteria used to determine quanta through the Quanta Discovery from Gradients (QDG) method. 3. Extending the experiments on the Quantization Hypotheses, using the proposed toy dataset, to Language Models (LLM) is hindered by the lack of clarity regarding the relationship between LLM tokens and quanta. 4. Section 5 claims that "Clustering based on inputs or outputs therefore seems unlikely to discover quanta in language modeling," but lacks substantial empirical evidence to support this statement. 5. The explanation concerning the relationship between quanta and gradients, which forms the basis for determining quanta in the QDG method, needs to be further developed and elaborated upon. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. In Figure 1, what decoding method was used for the auto-discovered quanta? 2. The experiments on the toy dataset appear to be influenced by the decoding algorithm (such as greedy, beam search, random). Is there any consideration given to this aspect? 3. In Figure 4, what is the relationship between the highlighted red section in the prompt and each graph? 4. In Section 5, it would be beneficial to include details about the dimensional of the gradients used in the Quanta Discovery from Gradients (QDG) method. The absence of explicit notation indicating the shape of each symbol for explaining QDG makes it challenging to ascertain the computational complexity involved in implementing QDG. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: There is a need to demonstrate the effectiveness of the proposed Quanta Discovery from Gradients (QDG) method in identifying quanta in various Language Models (LLM) and datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed feedback and questions, and for highlighting the importance of clarifying our definitions in the paper! We think the clarifications we’ll make, prompted by your review, will significantly improve the paper. We’ll respond point by point: ### Weaknesses: **1. The paper lacks a clear definition of quanta, which could benefit from further elaboration and clarification.** Here is the definition of quanta which we’ll add to the paper: _Definition of quantum (plural quanta)_: An indivisible computational module that, for example, retrieves a fact or implements an algorithm. Although quanta cannot be learned instantaneously in practice, we expect their formation to correspond with a quick drop in loss akin to an upside-down sigmoid. This was observed for the formation of “induction heads” in Olsson et al. (2022), “In-context Learning and Induction Heads”. We interpret the induction circuit described in that paper as one such quantum, and propose that LLMs can be understood as an ensemble of such modules. **2. Insufficient explanation is provided regarding the criteria used to determine quanta through the Quanta Discovery from Gradients (QDG) method.** Thanks again for pushing us toward a clearer presentation! _QDG_: Operationally, we identify quanta as clusters of next-token prediction samples (found by spectral clustering of gradients) that pass subsequent vetting for task coherence. While QDG does not on its own produce a mechanistic understanding of how the quanta work, it does suggest the function that these modules have. For instance, in one QDG cluster, all samples involve predicting a number to continue a numerical sequence, and we think that this suggests that there is a module in the network that is responsible for this capability/behavior. **3. Extending the experiments on the Quantization Hypotheses, using the proposed toy dataset, to Language Models (LLM) is hindered by the lack of clarity regarding the relationship between LLM tokens and quanta.** The quantity being measured in LLM scaling laws is the mean cross-entropy loss for next-token prediction in the training distribution of text. In our model, a quantum in an LLM is a computational module which improves the network’s ability to do next-token prediction on some fraction of tokens in the training distribution of text. To recover LLM scaling laws, we claim that these fractions (frequencies at which the quanta are used) follow a power law. **4. Section 5 claims that "Clustering based on inputs or outputs therefore seems unlikely to discover quanta in language modeling," but lacks substantial empirical evidence to support this statement.** When we tried to cluster next-token prediction samples just based on what the next token was, or based on the last tokens of the input, we didn’t get interesting clusters. This is a minor point, but admittedly not well supported by any experiments we showed in the paper. We’ll either remove this claim or support it with experiments. **5. The explanation concerning the relationship between quanta and gradients, ..., needs to be further developed and elaborated upon.** When we clustered samples according to gradients, we intended that this would cluster them according to whether the model used similar knowledge or circuitry to perform prediction on those samples. Consider some quantum (module) in the network. If it is important for prediction on a pair of samples, then the subset of network parameters that are part of the module will perhaps have nonzero and similar gradients for those samples. If on a different sample that module is irrelevant to prediction, then the module’s parameters might have gradients of close to zero (or at least different gradients from those samples which relied on the module). Therefore gradients on samples which rely on the same computational module will have higher cosine similarity than between samples which don’t rely on the same module. Thank you for pointing out that we didn’t justify this at all in the current version of the paper! We’ll add some text like this motivating the use of gradients to identify quanta in Section 5 when we introduce QDG. ### Questions **1. In Figure 1, what decoding method was used for the auto-discovered quanta?** With quanta, we’re interested in the capabilities present in a single forward pass of a model. The samples shown in Figure 1 are next-token prediction samples, where the token to be predicted is highlighted in red. We only applied clustering to samples (where a sample involves predicting a single token from its context) if the model’s cross-entropy loss on them was low (we chose a threshold of 0.1 nats). These are samples where the model would correctly predict the next token if greedy decoding was used. **2. The experiments on the toy dataset appear to be influenced by the decoding algorithm (such as greedy, beam search, random). Is there any consideration given to this aspect?** The multitask sparse parity task is a supervised binary classification problem which we train MLPs to solve, so we don’t think that decoding is relevant. Categorical cross-entropy loss seems like a natural metric of model performance on this task. **3. In Figure 4, what is the relationship between the highlighted red section in the prompt and each graph?** In Figure 1 and Figure 4, the highlighted red text indicates the token that the LM was predicting from the text shown before it. So in Figure 4, we show how cross-entropy loss changes with model scale in predicting the highlighted red token from its context. These examples are from the test set of The Pile corpus. **4. …The absence of explicit notation indicating the shape of each symbol for explaining QDG makes it challenging to ascertain the computational complexity involved in implementing QDG.** Good point, sorry for the omission! We’ll add details about the dimensionality of the model and the complexity of running QDG to Section 5. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed answers and results. Some of my concerns have been addressed, but i still my concern there is insufficient evidence to support the hypotheses about the proposed quantization model showing consistent results in large-scale language models. However, I would like to raise the score considering the potential demonstrated by the Quantization Model to understand "neural scaling" and the accompanying experiments that provide support for this.
Summary: This paper proposes a new way of understanding scaling laws. Namely, it shows that capabilities can be thought of as being broken into discrete units (quanta) which themselves follow a power law and have an inherent ordering--called the Q Sequence. This combined with the fact that 1) an increasing subset of quanta are learned over various scales and 2) model and data sizes can be related to the number of quanta gives rise to the scaling laws shown in previous works. The paper also proposes a method to cluster examples by unit-normalized gradient, and empirically confirms within margin of error that theory agrees with observed data. The quantization model explains a sudden emergence of new capabilities with scale: certain quanta must be learned before the model can carry out certain tasks. Strengths: Simple, effective model to understand scaling laws Weaknesses: While the paper shows that the model scaling trend agrees with theory on real data, it does not empirically validate whether the same alpha of 0.083 matches the data scaling theory. Also, it would be interesting to show this holds on non LLM tasks like image classification, since the (titular) claim is "neural scaling". Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Everything is clear. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: Not that I am aware of Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We’re glad you found the paper compelling! **While the paper shows that the model scaling trend agrees with theory on real data, it does not empirically validate whether the same alpha of 0.083 matches the data scaling theory. Also, it would be interesting to show this holds on non LLM tasks like image classification, since the (titular) claim is "neural scaling".** It would indeed be great to evaluate whether the Quantization Model holds on other types of data, e.g. for vision tasks like you suggest. Perhaps this could be done in future work. For data scaling, Figure 11 and Figure 17 of the Appendix (Supplementary Material) may partially address this. For the Pythia LLM scaling suite, all models are trained on the same amount of data, so we can’t study multi-epoch data scaling. However we do have intermediate training checkpoints for these models, so we can study scaling in training steps. In Figure 17 we see that the training curve is not obviously a clean power law, so it is not clear where to fit a line to the curve. For the lines we do fit, we get a slope which is less negative than -0.083, which is what we’d expect from our theory, although the measured slopes of between -0.037 and -0.06 are shallower than we’d expect from theory: -0.083 / (1 + 0.083) = -0.077. This could be due to a suboptimal choice of hyperparameters when EleutherAI trained these models. Figure 17 shows empirical scaling exponents in parameters and data from a variety of scaling papers. The most reliable point is probably the green point from the Chinchilla paper (Hoffmann et al.), which lies slightly above our curve $\alpha_D = \alpha_N / (\alpha_N + 1)$.
Summary: The authors investigate neural scaling laws and propose an explanation for the power law scaling that is observed in addition to the emergence of new behaviours. Strengths: The ideas are interesting and I found some of the experiments such as per token losses on the language models to be quite interesting. Weaknesses: I found the presentation of Section 5 to be a bit confusing. I also found the connection to emergent behaviours to be a bit speculative. Specifically I think the idea of looking at per token losses/etc to be quite interesting, but I wouldn't necessarily say that I am convinced that this is emergence in the same way I'd think of emergence on some BigBench tasks, for example. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could you elaborate on why you feel that this demonstrates emergent capabilities in language models? You mention the spectral clustering hyperparameters: how sensitive are other parts of this work to broad choices of hyperparameters? For example how are you setting batch size / LR for the LM experiments? In Figure 5 right panel there is some interesting shape in the yellow curves-- I'm curious what you think of those? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, the limitations are mostly well addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback and questions! We’re glad you found our ideas and some of our experiments interesting. Before responding to your questions point by point, we’d like to apologize for not making our most interesting contribution more clear, namely our notion of quantization and of quanta. Essentially we argue that large neural networks are implicitly ensembles of indivisible computational modules (quanta). We view our work as a step toward validating Marvin Minsky’s Society of Mind theory, that intelligent systems can be thought of as collections of many smaller systems performing specialized jobs. Intriguingly, we were able to reconcile this view of neural networks with scaling laws while also providing a framework for understanding emergence, although understanding emergence is a less important point of the paper. We will edit the paper to better emphasize our notion of quantization and its relation to Minsky. **I found the presentation of Section 5 to be a bit confusing.** We’re sorry about this! Section 5 could definitely be better motivated and explained. The first paragraph in particular jumps into some confusing discussion about partitioning the problem of language modeling into subtasks without motivating that. We will edit the paper to improve the clarity of this section. In Section 5, we were ultimately interested in trying to discover what some of the computational modules (quanta) in LLMs are. **I also found the connection to emergent behaviours to be a bit speculative. Specifically I think the idea of looking at per token losses/etc to be quite interesting, but I wouldn't necessarily say that I am convinced that this is emergence in the same way I'd think of emergence on some BigBench tasks, for example.** One advantage of looking at per-token loss is that it avoids some of the critiques of emergent abilities brought up by Schaeffer et al. (2023) “Are Emergent Abilities of Large Language Models a Mirage?” -- some examples of emergence in BigBench may be due to the choice of metric. In contrast, the loss on a single token is a smooth function of the model’s output, so if we see a sharp drop in loss, this suggests that there is some genuine qualitative difference in the model. **Could you elaborate on why you feel that this demonstrates emergent capabilities in language models?** One of the core questions we’re interested in with this paper is: how does scaling change what neural networks learn? Our hypothesis is that scaling adds new computational modules (quanta) to the network that weren’t present before. One prominent prior model of neural scaling is Sharma and Kaplan (2022) “Scaling Laws from the Data Manifold Dimension”. In this model, neural networks are understood as approximating a function defined on some “data manifold”, and the effect of scaling is to smoothly increase the resolution/precision at which this function is approximated. This can explain power law scaling, but there seems to be some tension between this view, where scaling leads to a smooth change in what models learn, and the sharp improvements with scale we see in network performance on some tasks (e.g. BigBench tasks with high “breakthroughness”). Now, one way of resolving this tension is to argue that emergence is an artifact of one’s choice of metric like Schaeffer et al. (2023) do. But we propose an alternative view with our model, where smooth power law scaling in mean loss averages over many discrete changes as indivisible modules (quanta) are added to the model. In the multitask sparse parity dataset we construct, this model of scaling holds -- emergence is real: performance on subsets of the data sharply transitions from random-guess performance to perfect performance with increasing scale. But we still get smooth power laws in mean loss. So a very strong notion of emergence can occur for data with the right structure. If this story described language modeling, then we could understand emergent abilities in language models as the result of new quanta being learned. If a benchmark exhibits emergence, we’d say that the quanta relevant to solving that task are learned at a similar scale. At that scale, models transition to good performance on the task since the necessary quanta are now present, and weren’t present at smaller scale. **You mention the spectral clustering hyperparameters: how sensitive are other parts of this work to broad choices of hyperparameters? For example how are you setting batch size / LR for the LM experiments?** For the LM experiments, we didn’t train any LMs ourselves but instead just used the open source Pythia suite from EleutherAI. For the multitask sparse parity experiments, we just used a large batch size and an AdamW learning rate of 1e-3 that worked well in practice. We’d be happy to run some grid searches over batch size and LR and include these results in the appendix! **In Figure 5 right panel there is some interesting shape in the yellow curves-- I'm curious what you think of those?** We’re not quite sure what’s causing the deviations from a clean power law that we see in the rank-frequency curves in Figure 5. In Appendix D.2 we experimented with a toy model of clustering and found that we could get similar looking curves when the dimension is high and when the noise was high -- see the lower right panel of Figure 15 if interested. We will edit the paper to point this out and refer readers to the appendix. --- Rebuttal Comment 1.1: Comment: Thanks for the response, I'm happy to raise my score to a 5.
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper proposes that the capabilities of a neural network are composed of discrete “quanta” that each help reduce loss on a subset of training examples, and that are learned in approximately decreasing order of frequency. If the quanta are present in a Zipfian (power-law) frequency distribution in the training data, and each quantum is approximately equally valuable in the examples where it’s present, such a theory would provide a mechanistic explanation for power law scaling of loss with respect to model and dataset size. First the authors validate this hypothesis on a toy dataset constructed to have this quantized structure (by being composed of several independent tasks each with a different prefix key). Then they find similar patterns in real language models, and use an algorithm they call “quanta discovery with gradients” to identify potential quanta in these models and their data. Their results are supportive of the quantization hypothesis, but with a high degree of uncertainty. Strengths: The hypothesis is a beautiful and important claim if true, and the paper provides a well presented and solid chunk of evidence that it is. The toy dataset experiment is simple and straightforward and demonstrates that the quantization hypothesis works under ideal conditions, and the LM experiment shows that similar phenomena are also present in real models, while proposing a reasonable initial approach to quanta discovery. The results on monogenic vs. polygenic samples also point pretty clearly towards the validity of quantization. Given QDG or any other model of what specifically the quanta might be, the hypothesis also enables testable predictions about scaling laws. Weaknesses: The multitask parity setting seems too obviously likely to lead to quanta, so it doesn’t seem to prove very much (although it’s useful to set up the framing, almost like a thought experiment). The QDG methodology is also a bit disappointing in a few ways. By clustering the gradients, it assumes that quanta (conceptually defined without reference to model structure) will be localized in the model—which is probably true, but might otherwise have been a testable hypothesis. It’s also too slow for the authors to have applied it to models larger than the smallest Pythia LM, greatly limiting its applicability. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: How much do you believe the hypothesis overall? Might there also be a significant role for “interaction terms” between quanta or are they close to being fully independent? Do you have early ideas about alternatives to QDG, especially alternatives that might be more computationally tractable? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors acknowledge the limitations of their methods and experiments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and questions! We’re glad you found the paper to be interesting and valuable. We’ll comment on the weaknesses and respond to your questions below: **The multitask parity setting seems too obviously likely to lead to quanta, so it doesn’t seem to prove very much (although it’s useful to set up the framing, almost like a thought experiment).** We agree that the multitask sparse parity dataset is a bit contrived. One of the ways in which it’s valuable is that it shows that the mechanism of power law neural scaling depends on the structure of the data. For instance, in Sharma and Kaplan (2022) “Scaling Laws from the Data Manifold Dimension”, they train neural networks on a set of toy regression problems of varying input dimension and observe power law scaling where the scaling exponent is determined by the dimension of the space that the function is defined on. For those sorts of tasks, their model of neural scaling, which views neural networks as approximating a function defined on some manifold with better and better precision with increasing scale, seems to hold. But our experiments on multitask sparse parity show that one can also get power law scaling in accordance with the Quantization Model when the data is fundamentally discrete, and where neural networks learn an increasing number of discrete computations with increasing scale, rather than globally approximating a function on a manifold with higher precision. So the mechanism of power law scaling depends on the structure of the data. Now when we observe power law scaling in the wild, we can ask whether the Sharma and Kaplan model of scaling or the Quantization Model of scaling, or maybe something else, best describes what’s going on. **The QDG methodology is also a bit disappointing in a few ways. By clustering the gradients, it assumes that quanta (conceptually defined without reference to model structure) will be localized in the model—which is probably true, but might otherwise have been a testable hypothesis. It’s also too slow for the authors to have applied it to models larger than the smallest Pythia LM, greatly limiting its applicability.** Since we cluster using the (almost) full model gradients (across all layers of the network) our method might still work even if computations are spread across large parts of the model. We just assume that gradients will have some consistency among samples where prediction relies on the same pieces of knowledge/computation. We hope that more principled and efficient methods could be developed in future work. **How much do you believe the hypothesis overall? Might there also be a significant role for “interaction terms” between quanta or are they close to being fully independent?** Indeed it might be unrealistic to think of the quanta as being fully independent in language modeling -- great point! In the current draft of the paper, when we talk about samples being polygenic, we imagine something like different circuits pushing the logits in a good direction independently. But realistically, polygenicity is probably more complicated than this. On some samples, a model’s computation might be more integrated, where loss is only lowered if multiple quanta are present simultaneously. In the final draft, we will clarify that polygenicity can be complicated, and the quanta might not lower the loss entirely independently of each other. One interesting biological analogy is the fact that in humans, blue eyes vs. brown eyes is a monogenic trait. However, this gene only matters in the context of many other genes, e.g. the genes that are responsible for humans having eyes in the first place! So it’s sort of implicitly polygenic. Many quanta in LLMs could be similar. A quantum might depend on other quanta, and so would be polygenic in some sense, but deleting it would sharply remove some capability of the model. **Do you have early ideas about alternatives to QDG, especially alternatives that might be more computationally tractable?** Using activations instead of gradients could lower the dimension considerably. Also using random projections of the gradient instead of the full gradients approximately preserves cosine similarity via the Johnson-Lindenstrauss lemma. Furthermore, the idea of clustering samples to discover quanta only makes sense when samples are monogenic. Ultimately, we’d like a scheme which enumerates the quanta and then for a given sample identifies which quanta (possibly many of them) were relevant for prediction on that sample. --- Rebuttal Comment 1.1: Comment: Thanks for the follow-up! I'm glad to see you mentioned adding TinyStories quanta to the appendix in a different rebuttal; I should have suggested something like that too! I think your framing in the first question answer (regarding the Sharma/Kaplan and quantization models) is clearer and more explicit motivation for the multitask parity task than I remember seeing in the paper; could be worth making the point equally directly there.
null
null
null
null
null
null
From One to Zero: Causal Zero-Shot Neural Architecture Search by Intrinsic One-Shot Interventional Information
Reject
Summary: This paper proposes a 0-shot NAS method. The key point is that there are latent factors that can influence the architecture search procedure, making the validation accuracy of one-shot NAS unreliable. The method adopts Gaussian intervention to the data and evaluates each operation's performance to reduce the bias brought by validation dataset sampling. Some experiment on CIFAR-10, NAS-Bench-201, and ImageNet show that the method can efficiently search architectures. Strengths: 1. The paper is well written and easy to follow. 2. The paper considers the latent factors that may influence the NAS validation, which has been hardly considered in existing works. 3. The proposed method uses causal inference techniques to solve the NAS problem. Weaknesses: 1. Some definitions and connotations are unclear in the paper. See Questions for more details. 2. The proposed algorithm is too straightforward. It seems directly use intervention technique in NAS procedure, but do not consider any characteristics of NAS problems, which makes the contribution of the method limited. 3. In the experiment part, the authors claim the efficiency of the method, but lots of 0-shot NAS methods has a very high efficiency. The authors do not compare with those 0-shot NAS methods in this part. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. The paper is inconsistent about the connotation of the latent factors. At Line 127, the paper states that many factors like data distributions, batch sizes, rates of weight decay can affect validation. However, according to Section 3.3, the authors seem to consider only the data distribution as the latent factor. What is the exact connotation of the latent factors considered in your paper? 2. At Line 135, why validation accuracy obeys Gaussian distribution? In my perspective, under some assumptions, the logits can follow a Gaussian distribution. But when it comes to accuracy, it is hard to say it also follows Gaussian Distribution. 3. How to define "true validation accuracy" in Line 140? I tried to understand that the author was trying to say that there is a true distribution of latent factors, and that the results measured in this true distribution are the true accuracy. But how to define the true distribution? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for reviewing this manuscript. Here are some questions: 1. What do you mean by 'definitions'? The validation accuracy is the accuracy of the validation set. 2. What do you mean by 'straightforward;? As far as I know, straightforwardness is a strength in writing. Besides, our proof of the Gaussian distribution of any zero-shot NAS is significant theoretically. 3. We redefine the framework of zero-shot NAS. We have no need to compare with those less efficient works. Ours is the most efficient of all. Here are some answers to your questions: 1. It is a good question, the other factors can be considered but not a must. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Too many unsolved concerns. Thus, I decrease the rating to strong reject.
Summary: This paper formulates zero-shot NAS as a causal-representation-learning. Further, it uses the high-level interventional data from one-shot NAS to facilitate zero-shot NAS to refine the imperfectness. Extensive experiments achieved comparable performance results on multiple benchmarks. Strengths: 1) This paper proposes to use the high-level interventional data to facilitate zero-shot NAS to address the imperfect-information issue. 2) This paper provides theoretical support for the proposed approach. Weaknesses: 1) The novelty is incremental compared to baseline Zen-NAS approach, and also directly applies the Shapley value which is also leveraged in Shapley-NAS.  2) Although the search cost is low, the achieved performance improvement is not significant. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1) Please provide more details about the differences between this work and Zen-NAS、Shapley-NAS, as described in Weakness 1. 2) The experiments should be compared with the baseline methods, including Zen-NAS and Shapley-NAS, as well as with the latest SOTA zero-shot methods such as ZiCo. 3) What about the ranking consistency in different NAS benchmark? 4) Minor Errors a) Line 40: "zeros-shot" -> "zero-shot" b) Line 313: "92.44" -> "92.44%"; Line 314: "41.31" -> "41.31%" Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The novelty is limited, and the achieved performance improvement is limited. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for reviewing this manuscript. Here are some questions: 1. We do not use Zen-NAS as the baseline but build it on our own. Please could you explain why our work is related to Zen-NAS? And why 'the novelty is incremental'? 2. The search cost is significant in NAS. Have you ever read the related works? --- Rebuttal 2: Title: Response to Rebuttal Comment: None of the issues were addressed. Therefore, I decrease my score to Reject.
Summary: This paper presents a causal definition of zero-shot NAS and facilitate this with interventional one-shot knowledge data. The paper theoretically demonstrates the validation information of either a neuron or a neuron 60 ensemble obeys a Gaussian distribution given a Gaussian input. It then uses high level interventional data from one-shot NAS to solve the imperfectness of zero-shot NAS. The zero-cost NAS method is studied on DARTS space and NAS Bench 201 with very low search cost while maintaining comparable test accuracies. Strengths: The paper is well-written and novel. Studying NAS from a causality perspective is original and interesting. Weaknesses: - The main weakness in my opinion is the weak evaluation of the approach. [NAS-Bench Suite Zero](https://arxiv.org/pdf/2210.03230.pdf) provides easy access and evaluation on about 13 proxies and 28 tasks. Since the performance of a given proxy can vary widely depending upon the search space, task and datasets I am not convinced of the effectiveness of the proxy based on only the results on NATS-Bench and DARTS spaces. Furthermore how would one apply this method to transformer spaces (eg [AutoFormer](https://openaccess.thecvf.com/content/ICCV2021/papers/Chen_AutoFormer_Searching_Transformers_for_Visual_Recognition_ICCV_2021_paper.pdf) , [HAT](https://arxiv.org/abs/2005.14187)) and MobileNets(eg: [OFA](https://arxiv.org/pdf/1908.09791.pdf), which prove queriable validation accuracy based on a surrogate. Can this method be applied in these spaces? Since modern one-shot NAS methods are applied on transformer and mobilenet spaces too, it is important to design a proxy that does indeed generalize well. I encourage the authors to evaluate their method on these search spaces too. - The results in table 1 and table 2 show a drop in accuracy ( though the num params is lower and search cost is low). Hence I am not very convinced of the effectiveness of the method in finding effective architectures. - Since reproducibility is quite important in NAS, I think it is very important for the authors to release their code (I couldn't find the code attached). - Correlation coefficient of the ranking with the true ranking is not studied Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Most questions are covered in the weakness part. My questions are as summarized below: 1. Could the authors evaluate their method on all the tasks in [NAS-Bench Suite Zero](https://arxiv.org/pdf/2210.03230.pdf) and discuss about the applicability of the method to transformer spaces? 2. Could the authors report the correlation coefficient of the predicted ranking with the true ranking ? 3. Would the authors be releasing the code? 4. Could the authors confirm that the best practices [here](https://www.automl.org/nas_checklist.pdf) are followed? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: I encourage the authors to discuss the limitations of the proposed method more extensively for eg: assumptions which are architecture type specific? search space generality? Any assumptions which may not hold in practice? Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for reviewing this manuscript. We will consider the dataset you mentioned. However, there are too many zero-shot neural architecture search datasets. I don't think it is a must. Anyway, thank you for this suggestion. --- Rebuttal Comment 1.1: Comment: I have read the response and most of my concerns still remain. I keep my score
Summary: This paper proposes a causal zero-shot neural architecture search (NAS). The NAS problem is decomposed into two components: ensemble selection and neuron selection. By employing the Gaussian intervention to approximate validation accuracy, the authors adapt the perturbation-based approach from DARTS+PT to search for architectures. The performance of their proposed Causal-Znas is evaluated on both NAS-Bench-201 and DARTS search spaces. Strengths: - It is reasonable to consider the one-shot interventional information for zero-shot NAS. Weaknesses: - The overall contribution of this research appears to be limited. The combination of zero-shot NAS with DARTS+PT is not considered novel and does not provide significant insights. - There are some omissions of recent related works in Section 2.2, such as GradSign[1] and ZiCo[2]. - The experimental comparison provided is insufficient. The results of Zen-NAS are missing in the comparison, and it would be beneficial to include a comparison with GradSign and ZiCo. Additionally, recent works often conduct experiments on other tasks such as NLP and ASR, which could provide further insights. [1] Zhihao Zhang and Zhihao Jia. Gradsign: Model performance inference with theoretical insights. In ICLR, 2022. [2] Guihong Li, Yuedong Yang, Kartikeya Bhardwaj, and Radu Marculescu. Zico: Zero-shot nas via inverse coefficient of variation on gradients. In ICLR, 2023. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please see the Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for reviewing this manuscript. I am very pleased that you mentioned some related works in Zeo-shot NAS. Here are some questions: 1. Please could you explain the way our work is related to DARTS+PT? We have not found a clue yet. 2. Does ZiCo a common theoretical approach or just an incremental one compared to Zen-NAS?
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Projection Regret: Reducing Background Bias for Novelty Detection via Diffusion Models
Accept (poster)
Summary: Recent methods have mainly utilized the reconstruction property of in-distribution samples to detect OOD by diffusion models. However, they often suffer from detecting OOD samples that share similar background information to the in-distribution data. Based on the observation that diffusion models can project any sample to an in-distribution sample with similar background information, the paper proposes Projection Regret (PR), an efficient novelty detection method that mitigates the bias of non-semantic information. To be specific, PR computes the perceptual distance between the test image and its recursive diffusion-based projection to detect the abnormality. Strengths: 1. The paper is written well and is easy to understand. 2. The studied problem is very important. 3. The results seem to outperform state-of-the-art. Weaknesses: 1. From the introduction, the projection cannot change the background information too much, which seems to be harmful on near-OOD detection, such as Cifar10 vs Cifar100. Since c100 and c10 has a similar background, then wouldn't this projection create a barrier further for solving the OOD detection problem? 2. The design and hyperparameter search (including the ensemble) seems to be very important on the results. How do the authors get the best configuration on a new ID/OOD pair? Is there any sensitivity analysis on more pairs? i.e., doe the current Figure 3 generalize to most ID/OOD pairs? 3. The computational budget is a little bit concerning in the algorithm. Is there any concrete comparison or explanation? 4. More results on large-scale benchmarks (i.e., ImageNet) and large (diffusion) models seem to be more meaningful for practical concerns and interpretability. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: see above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer SZQH, We sincerely thank you for your helpful feedback and insightful comments. In what follows, we address your concerns one by one. ___ **[Q1]** From the introduction, the projection cannot change the background information too much, which seems to be harmful on near-OOD detection, such as Cifar10 vs Cifar100. Since c100 and c10 has a similar background, then wouldn't this projection create a barrier further to solving the OOD detection problem?\ **[A1]** We clarify that our projection of an out-of-distribution (OOD) sample changes its semantics from OOD to in-distribution (see Figure 1(a)). Since our detection score is based on the semantic distance metric between a test image and its projection, our algorithm will output a high abnormality score on the OOD dataset even if the background information is similar. This claim is evidenced by our empirical results: our algorithm significantly outperforms baselines on CIFAR-10 vs CIFAR-100 (see Table 1). To further address your concern, we additionally experiment with an additional OOD benchmark, the ColorMNIST dataset [1,2] where spurious OOD samples have **the same background statistics** (i.e., same background colors) but different semantics (i.e., different digits) compared to the in-distribution data. In this dataset, our algorithm achieves near-perfect OOD detection performance as shown in Table 6 (see Global Response PDF). These experimental results verify that our method can detect OOD samples that have similar backgrounds to the in-distribution samples. ___ **[Q2]** The design and hyperparameter search (including the ensemble) seems to be very important in the results. How do the authors get the best configuration on a new ID/OOD pair? Is there any sensitivity analysis on more pairs? i.e., doe the current Figure 3 generalize to most ID/OOD pairs?\ **[A2]** We choose the hyperparameters for each in-distribution (ID) data without any access to out-of-distribution (OOD) samples as described in footnote 1 on page 5. To be specific, we consider rotated in-distribution samples as synthetic OOD ones and choose the best hyperparameters in this synthetic OOD detection task. We find that the hyperparameters selected from the synthetic task are well generalized across other OOD datasets, as we verified in our experiments. Therefore, for a new ID dataset, one can find the best configuration itself, and the configuration can be used across various OOD pairs. For your information, we further provide the sensitivity analysis of Projection Regret on more ID/OOD pairs: CIFAR-10 vs (SVHN, LSUN, ImageNet) datasets in Figure 6 (see Global Response PDF). Somewhat interestingly, the hyperparameter chosen by our synthetic OOD detection task ($\alpha$=9, $\beta$=8) also achieves the best performance on LSUN and ImageNet datasets or a small gap (0.005 AUROC) against the best performance on the SVHN dataset. We further improve this by ensembling across multiple configurations and the ensemble outperforms the best-searched hyperparameter (see Table 3). Finally, as the reviewer asked about the generalization property of Figure 3 to other ID/OOD pairs, we also test our projection with varying timesteps. As shown in Figure 6 (see Global Response PDF), the trend in the LSUN and ImageNet OOD datasets shows a similar trend to CIFAR-100 and SVHN shown in Figure 3. Therefore, we think the hyperparameters are not sensitive to ID/OOD pairs, which is a merit of our framework. ___ **[Q3]** The computational budget is a little bit concerning in the algorithm. Is there any concrete comparison or explanation?\ **[A3]** Our algorithm is efficient because it only requires three projections and each projection can be computed by one forward pass using a consistency model [3] that enables one-shot generation, as described in Algorithm 1. As a result, even with our ensemble technique described in L162-L167, our algorithm is 1.8x faster than the second-best baseline, LMD [4] (0.395s vs 0.705s per sample, respectively). ___ **[Q4]** More results on large-scale benchmarks (i.e., ImageNet) and large (diffusion) models seem to be more meaningful for practical concerns and interpretability.\ **[A4]** To show our algorithm’s scalability in large-scale datasets with high-resolution images, we construct an OOD detection task on the LSUN domain. Specifically, we set the bedroom class of LSUN as an in-distribution dataset and apply the pre-trained consistency model to detect bridge/church/classroom OOD datasets. We also experiment with LMD as a baseline. As reported in Table 8 (see Global Response PDF), our method significantly outperforms LMD by a large margin across all large-scale LSUN OOD detection tasks. We also observe that our method achieves better AUROC in relatively far OOD datasets (bridge, church) against near OOD datasets (classroom). Hence, this additional experimental result shows the scalability and the potential of practical applications. ___ **References** \ [1] Invariant risk minimization, arXiv 2019\ [2] On the impact of spurious correlation for out-of-distribution detection, AAAI 2022\ [3] Consistency Models, ICML 2023\ [4] Unsupervised out-of-distribution detection with diffusion inpainting, ICML 2023 --- Rebuttal Comment 1.1: Title: Acknowledgement Comment: Thanks for your response to address my questions. I have increased the score to 5. --- Reply to Comment 1.1.1: Comment: We are glad that our responses addressed your concerns. We will incorporate the additional results in the final manuscript. Thank you! Authors.
Summary: This paper discusses a method in machine learning known as Novelty Detection, which is used to identify abnormal or out-of-distribution (OOD) samples. The authors suggest that diffusion models, a popular generative framework due to their strong generation performance, has recently become an attractive tool for novelty detection. However, the authors note a problem: while these models excel at generating high-quality results, they often struggle to detect OOD images with similar backgrounds. This issue, referred to as 'background bias', can lead to inaccurate novelty detection. To address this issue, the paper proposes 'Projection Regret' (PR), a method which reduces the impact of background bias and improves the accuracy of novelty detection. PR calculates the perceptual distance between a test image and its 'projection' (an in-distribution sample with similar background information created by the diffusion model). To further mitigate the effect of dominant background information, PR uses recursive projections to cancel out the background bias. The paper also introduces an ensemble of multiple projections for improved detection performance, calculated efficiently via a consistency model. A new perceptual distance metric using underlying features of the diffusion model is proposed and compared to other metrics, showing promising results. In conclusion, the paper's main contributions are: 1. Identifying the issue of background bias in novelty detection via diffusion models. 2. Proposing a solution, Projection Regret (PR), which mitigates this bias and enhances OOD detection. 3. Introducing an alternative perceptual distance metric computed from decoder features of the pre-trained diffusion model. Extensive experiments demonstrate that PR significantly outperforms previous diffusion-based novelty detection methods, showing potential for future applications. Strengths: **Originality:** The paper presents a novel method called Projection Regret (PR) to address the problem of background bias in novelty detection using diffusion models. This is a creative combination of existing ideas, especially the use of perceptual distance and recursive projections to cancel out dominant background information. The proposed alternative perceptual distance metric using decoder features of the pre-trained diffusion model also adds to the originality. **Quality:** The research appears to be of high quality. The authors have carried out extensive experiments to demonstrate the effectiveness of their proposed method. They have also compared PR with other existing methods for novelty detection, showing that it outperforms them by a significant margin. **Clarity:** The paper is well written and structured. The authors clearly outline the problem of background bias in novelty detection, explain the concept behind PR, and provide detailed explanations of their experimental setup and results. They also do a good job of explaining complex concepts in an understandable way. **Significance:** The significance of this work lies in its potential to improve the accuracy of novelty detection in machine learning, which has broad applications in many fields, including medical diagnosis, autonomous driving, and forecasting. By addressing the issue of background bias, the proposed PR method could enhance the reliability and safety of deep learning applications. Additionally, the proposed perceptual distance metric could serve as a useful tool for researchers working on similar problems in the future. Weaknesses: While the paper presents a significant contribution to the field, there are a few areas where it could be improved: **1. Evaluation metrics:** While the paper does an excellent job of comparing with existing methods and demonstrating the superiority of PR, it might benefit from more diverse evaluation metrics. At present, the paper primarily focuses on detection accuracy (AUROC). Incorporating additional measures such as precision-recall curves or F1 scores could provide a more comprehensive evaluation. **2. Real-world applications:** The paper demonstrates PR's effectiveness using standard datasets (CIFAR-10 vs CIFAR-100 etc.), but it would be valuable to see how PR performs in real-world scenarios. It is crucial to know how the method copes with the complexity and variability found in actual application scenarios including medical imaging or self-driving car data. **3. Deeper exploration of background bias:** Although the authors have proposed an innovative solution to tackle the issue of background bias, they could delve deeper into this problem. Understanding the nature of this bias, its origins, and why it is particularly problematic for diffusion models can further enhance the paper's impact. **4. Computational efficiency:** While the authors mention that PR can be calculated efficiently via a consistency model, specific data about computational requirements, time complexity, and scalability is missing. Providing these details can help readers assess whether PR is suitable for their specific use-cases, especially those that require real-time processing or deal with large-scale datasets. **5. Robustness analysis:** The paper could benefit from a detailed robustness analysis of PR against various types of noise and distortions. This would allow potential users to understand the limits of the approach and potential pitfalls in practical applications. By addressing these points, the paper could strengthen its contributions and appeal to a broader audience within the machine learning community. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: **Questions:** 1. Could you provide more details on the computational requirements of PR? For instance, what is the time complexity and how does it scale with the size of the dataset? 2. Would PR be equally effective in real-world applications where data might be more complex or noisy compared to standard datasets like CIFAR-10 or CIFAR-100? 3. Could you delve deeper into the issue of background bias? Understanding its origins and why it affects diffusion models specifically could strengthen the paper. 4. Have you investigated the robustness of PR against various types of noise and distortions? If not, do you anticipate that the method would be robust against such challenges? **Suggestions:** 1. Consider incorporating additional evaluation metrics such as precision-recall curves or F1 scores for a more comprehensive evaluation of PR's performance. 2. It would be beneficial to demonstrate how PR performs with real-world datasets, such as medical imaging data or self-driving car sensory data. This would help readers understand its applicability and effectiveness in practical scenarios. 3. Detailed information about computational efficiency, including time complexity and scalability aspects, would be useful for potential users assessing suitability for their specific use-cases. 4. Conducting a robustness analysis against various types of noise and distortions would further validate the efficacy of PR under different challenging conditions. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors have addressed some limitations of their work, particularly the issue of background bias in diffusion models and how their proposed method, Projection Regret (PR), mitigates this. However, a few additional limitations could be further discussed: 1. **Computational Resources:** While the authors mention that PR can be calculated efficiently, the paper does not provide specific details on computational requirements or scalability. As advanced machine learning models often require significant computational resources, an evaluation of this aspect would be helpful for potential users. 2. **Robustness:** The paper does not explicitly discuss the robustness of PR to noise or distortions in the data. Given that real-world data can often be noisy or imperfect, discussing the limitations of PR in handling such situations would be beneficial. Regarding the broader societal impacts, the paper does not directly address this point. While novelty detection in machine learning can have multiple positive societal impacts, like enhancing medical diagnosis or improving safety in autonomous driving, it might also have potential negative impacts: 1. **Privacy concerns:** Improving the ability of machines to identify novel information could potentially lead to privacy concerns, as more accurate models might be misused for surveillance or unauthorized data collection. 2. **System Misuse:** In high-stakes applications, an over-reliance on automated novelty detection systems that might still make mistakes could lead to serious consequences. The authors could improve their paper by addressing these points, possibly in a new section dedicated to limitations and broader societal impacts. This would help readers appreciate the full context of the research, including its potential downsides. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer 9KBe, We sincerely thank you for your helpful feedback and insightful comments. In what follows, we address your concerns one by one. ___ **[Q1]** Could you provide more details on the computational requirements of PR? For instance, what is the time complexity, and how does it scale with the size of the dataset?\ **[A1]** As shown in Algorithm 1, our algorithm (Projection Regret or PR) performs 3 forward computations of the consistency model. Therefore, the time complexity is linear to the size of the dataset. In a real-time setting, our algorithm takes 0.395s/sample in the CIFAR-10 dataset and the consistency model version of LMD [1] takes 0.705s on the same dataset. ___ **[Q2]** Would PR be equally effective in real-world applications where data might be more complex or noisy compared to standard datasets like CIFAR-10 or CIFAR-100?\ **[A2]** We scale Projection Regret into a large-scale LSUN domain where the dataset is more complex. To be specific, we set the bedroom class as the in-distribution (ID) dataset and set bridge/church/classroom classes as the out-of-distribution (OOD) dataset. We summarize the OOD detection result in Table 8 (see Global Response PDF). Our method outperforms the second-best method, LMD [1], with a significant gap. This result shows the potential of PR in real-world applications. ___ **[Q3]** Could you delve deeper into the issue of background bias? Understanding its origins and why it affects diffusion models specifically could strengthen the paper.\ **[A3]** The background bias issue originates from the fact that we do not have a perfect distance metric that only captures the semantic difference between two images. While the LPIPS distance metric [2] mainly used in our work captures such a semantic change better than $\ell_{2}$ distance, it often captures other information (e.g., background), especially when the main object is relatively small (see L49-L50). Furthermore, when we apply the metric on various datasets that significantly differ from ImageNet, e.g., SVHN or LSUN, it is not likely that the LPIPS distance metric fully captures the semantic change between the input data and its projection. In such scenarios, the distance metric could be biased by the background information as shown in Figure 1(b). Hence, we think the background bias issue is problematic due to an imperfect distance metric. To reduce the background bias, we propose Projection Regret using the unique characteristic of diffusion models: diffusion-based projection can map any OOD sample to an ID one with a similar background. Therefore, we rather regard diffusion models (or consistency models) as the remedy to compensate for the imperfect distance metric. Furthermore, it is evident that OOD detection methods based on alternative generative models, e.g. GANs or GLOW, are also vulnerable to OOD with a similar background (see CIFAR-10 vs CIFAR-100 OOD detection results in Table 2). Hence, we think background bias is not limited to the diffusion model but a common problem of unsupervised OOD detection methods. ___ **[Q4]** Have you investigated the robustness of PR against various types of noise and distortions? If not, do you anticipate that the method would be robust against such challenges?\ **[A4]** We appreciate the reviewer for the motivating suggestion. We further test PR’s robustness under the corruption of the in-distribution data. To be specific, we set **corrupted** CIFAR-10 data [3] as pseudo-in-distribution data and perform an OOD detection task against **uncorrupted** SVHN/CIFAR-100/LSUN/ImageNet as OOD datasets. For the corruption strategy, we use (shot noise, defocus blur, fog, and elastic transform) for the representative strategy of (noise, blur, weather, and digital) in varying corruption levels of (1,2,3). We provide the visualization of the corrupted data in Figure 5 (see Global Response PDF). We present the result in Table 9 (see Global Response PDF). As corruption intensity increases, the OOD detection performance decreases. While strong corruptions (e.g. shot noise, fog) deteriorate OOD detection performance significantly, our method is relatively robust to weak corruption (e.g. defocus blur). Hence, our method would be at least robust to imperceptible corruption. ___ **[Q5]** Consider incorporating additional evaluation metrics such as precision-recall curves or F1 scores for a more comprehensive evaluation of PR's performance.\ **[A5]** Widely used popular metrics on OOD detection (TNR at 95\% TPR, detection accuracy, AUPR_IN, and AUPR_OUT) are included in the CIFAR-10 vs (SVHN, CIFAR-100, LSUN, ImageNet) OOD detection experiments in Table 10 (see Global Response PDF). The result is consistent and we will add the full results in the final manuscript. ___ **[Q6]** Improving the ability of machines to identify novel information could potentially lead to privacy concerns, as more accurate models might be misused for surveillance or unauthorized data collection. In high-stakes applications, an over-reliance on automated novelty detection systems that might still make mistakes could lead to serious consequences.\ **[A6]** We appreciate the reviewer for elaborating on the limitations of our method applied in real-life scenarios. We will incorporate this discussion into the limitation section in the final draft. ___ **References**\ [1] Unsupervised out-of-distribution detection with diffusion inpainting, ICML 2023\ [2] The unreasonable effectiveness of deep features as a perceptual metric, CVPR 2018\ [3] Benchmarking neural network robustness to common corruptions and perturbations, ICLR 2019 --- Rebuttal Comment 1.1: Comment: I acknowledge I have read the rebuttal. --- Reply to Comment 1.1.1: Comment: We thank you for your time to review our paper and read our rebuttal. We sincerely appreciate (i) your acknowledgment of our work to **“present significant contribution to the field”** and (ii) your extensive suggestions to **“strengthen our contributions and appeal to the broader audience”**. As we strongly believe that we have successfully addressed all questions and concerns raised by the initial review, we politely ask you to consider feedback on our responses or update the score. Since we still have time remaining for the author-reviewer discussion period, please feel free to ask any further questions. Thanks. Best, \ Authors
Summary: This paper presents a novel approach to detecting and handling novelty in data using a generative diffusion model, with a specific focus on addressing biased backgrounds. The proposed method aims to transform noisy samples into perfect ones by leveraging the capabilities of the diffusion model. The central idea is that by reversing the effects of noise on an inlier sample (x+noise), the resulting output will closely resemble the original sample (X). Conversely, if X were to be treated as an out-of-distribution (OOD) sample, the diffusion model trained on similar OOD backgrounds could project the samples onto a background resembling the inlier data. Unlike traditional methods, such as auto-encoders that rely on reconstruction error, this approach emphasizes the significance of high and meaningful reconstruction errors for anomaly detection. Since the primary focus is on the foreground, which constitutes a small portion of the images, the reconstruction error becomes a more reliable indicator of anomalies. To validate the effectiveness of the proposed method, the authors conducted evaluations on various datasets, including CIFAR-10, CIFAR-100, SVHN, and ImageNet. Comparative analyses were performed against existing generative models commonly employed for novelty detection. Strengths: Recently, there has been growing concern that existing methods for image classification, which are generally not specific to one-class classification or novelty detection tasks, face significant challenges. Additionally, studies in the field have highlighted the problem of bias in the background for novelty detection. In response, this paper addresses this ongoing challenge by proposing a solution. The paper leverages the reverse step of the diffusion model to restore noisy outliers, with the goal of aligning the background of these recovered samples more closely with the inlier samples. This approach is intriguing and tackles a specific aspect of the problem. The results demonstrate the effectiveness of the proposed idea. Weaknesses: One weakness of this paper is that the main idea of using projection and reconstruction error for novelty detection is not novel. There have been previous papers, such as "Adversarially One-Class Classifier for Novelty Detection" (CVPR 2018), that have presented similar ideas but with different approaches. Thus, the high-level novelty of the proposed method is questionable. However, the method is novel. Furthermore, the paper claims the method is less biased towards the background. However, the datasets used for evaluation, such as CIFAR and SVHN, are not known for having extreme biases toward the background. To address the bias problem effectively, evaluating the method on datasets that exhibit strong biases in the background would be more appropriate, such as "Hard ImageNet: Segmentations for Objects with Strong Spurious Cues." Other papers specifically tackle the issue of spurious correlations and out-of-distribution detection, such as "On The Impact Of Spurious Correlation For Out-of-distribution Detection" by Yifei Ming (2021). It would be valuable to compare the results of the proposed method with those papers that specifically address the bias problem rather than just comparing against other generative models. Moreover, the proposed method may fail in cases where the OOD samples have backgrounds that are very similar to the inliers but differ in the main concept or foreground. In such scenarios, the background would be ideally recovered using the proposed method, leading to false positives where OOD samples are detected as inliers. At the moment there are a lot of methods that try to handle the same problem to the reconstruction error can be a reliable score for novelty detection for example, see the Mem-autoencoder paper for novelty or anomaly detection. Addressing these weaknesses and providing a more thorough comparison with relevant papers and datasets would strengthen the overall contribution of the proposed method. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: The idea and problem mentioned in this paper are interesting. However, the authors need to consider the following points: (1) They should review documents in the field of fairness for anomaly detection and those that attempt to mitigate bias. (2) There are benchmarks for evaluating methods biased against the background, such as Hard ImageNet. While these datasets may not be explicitly designed for out-of-distribution (OOD) detection, evaluating the proposed method on such data would be beneficial. Although CIFAR-10 and CIFAR-100 not have meaningful background concepts, additional evaluations on datasets with more pronounced background biases would provide valuable insights. (3) The comparison with existing methods needs improvement." Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer ZGie, We sincerely thank you for your helpful feedback and insightful comments. In what follows, we address your concerns one by one. ___ **[Q1]** One weakness of this paper is that the main idea of using projection and reconstruction error for novelty detection is not novel. There have been previous papers, such as "Adversarially One-Class Classifier for Novelty Detection" [1], that have presented similar ideas but with different approaches. Thus, the high-level novelty of the proposed method is questionable. \ **[A1]** We ​​politely disagree with the reviewer’s opinion: our main idea is to use the **“recursive projection”** and the **“regret term”**, and it is clearly different from the previous works, e.g., [1]. As we discussed in Section 3.2, a simple reconstruction approach like [1] may fail to capture semantic changes when background information is dominant. Hence, our idea plays a crucial role in novelty detection as we empirically verified in our ablation experiment (Table 3). We strongly believe that our work is a non-trivial extension of the previous reconstruction-based approaches, i.e., our idea is novel and effective compared to them. We respectfully ask you to reconsider your position on our high-level novelty. ___ **[Q2]** Furthermore, the paper claims the method is less biased towards the background. However, the datasets used for evaluation, such as CIFAR and SVHN, are not known for having extreme biases toward the background. To address the bias problem effectively, evaluating the method on datasets that exhibit strong biases in the background would be more appropriate, such as "Hard ImageNet: Segmentations for Objects with Strong Spurious Cues." ...\ **[A2]** Thank you for the valuable suggestion on experiment setups. As you suggested, we validate the effectiveness of our method on a spurious out-of-distribution (OOD) benchmark, ColorMNIST [2], presented by Ming et al. [3] where digits are given as semantic information. In this benchmark, different digits with the same background color are given as spurious OOD samples. Furthermore, in-distribution digits and background information are highly correlated (r=0.45) in the training setup. As we reported in Table 6 (see Global Response PDF), our method significantly outperforms [3] even without using label information. This is because our method relies on generative modeling which aims to learn every pixel information rather than collapsing into a shortcut solution. It is also worth noting that the methods [2-3] that address the bias problem in classifiers cannot be directly applied to unsupervised OOD detection since the above methods require label information. We believe this discussion further strengthens our contribution, especially for various practical and challenging scenarios. We will incorporate this experimental result and discussion into the final manuscript. ___ **[Q3]** Moreover, the proposed method may fail in cases where the OOD samples have backgrounds that are very similar to the inliers but differ in the main concept or foreground. In such scenarios, the background would be ideally recovered using the proposed method, leading to false positives where OOD samples are detected as inliers.\ **[A3]** We clarify that such a scenario is not our failure case because our detection score is based on the distance between a test image and its projection. If an out-of-distribution (OOD) sample has background statistics that are very similar to the inliers but differ in the main concept, then our projection of the OOD sample becomes similar to an inlier one, which leads to a large distance between the OOD sample and its projection due to their semantic distance. In contrast, an in-distribution sample has a small distance because its projection has similar semantic information. This claim is also supported by our experimental results. First, although the ColorMNIST dataset [2] presented in [A2] includes such OOD samples (i.e., different digits but the same background colors), our method achieves near-perfect OOD detection performance as reported in Table 6 (see Global Response PDF). In addition, we think that CIFAR-10 vs CIFAR-100 is another representative task of discriminating OOD samples with similar backgrounds. In this task, we outperform the previous best diffusion-model-based method, LMD [4], by a large margin. These experimental results verify that our method can detect OOD samples that have similar backgrounds to the inliers. ___ **[Q4]** Comparison against Mem-autoencoder paper.\ **[A4]** We appreciate you introducing the related work, Mem-autoencoder (MemAE) [5]. For the comparison against [5], we experiment with Projection Regret and other competitive baselines in the one-class classification task on CIFAR-10 and report their results in Table 7 (see Global Response PDF). Our method greatly outperforms the reconstruction-based baselines, including MemAE, across all classes. For example, Projection Regret outperforms MemAE by 25\%. ___ **[Q5]** Providing a more thorough comparison with relevant papers and datasets would strengthen the overall contribution of the proposed method.\ **[A5]** We appreciate the reviewer for the introduction of related fields and methods. We will include the related research on spurious correlations [2] and reconstruction-error-based novelty detection methods [1,5] further in the final manuscript. ___ **References**\ [1] Adversarially learned one-class classifier for novelty detection, CVPR 2018\ [2] Invariant Risk Minimization, Arxiv 2019\ [3] On the impact of spurious correlation for out-of-distribution detection, AAAI 2022\ [4] Unsupervised out-of-distribution detection with diffusion inpainting, ICML 2023\ [5] Memorizing normality to detect anomaly: memory-augmented deep autoencoder for unsupervised anomaly detection, ICCV 2019 --- Rebuttal Comment 1.1: Title: upgrade my evaluation. Comment: Thank you for your response. While many of my concerns have been addressed by the authors, there seems to be a lingering issue regarding (1) the novelty of the approach. I concur with the authors' assertion that their method diverges from a simple reconstruction-based model, instead employing a more refined approach with superior techniques. Additionally, (2) the computational costs associated with this method are notably higher than their predecessors. However, I still find the concern regarding novelty valid. Although the method is functional (and the perception of novelty can be pretty subjective), I have decided to revise and upgrade my evaluation. --- Reply to Comment 1.1.1: Comment: Thank you for your additional comments. We further experiment with reducing the computational cost of our algorithm. We add the new results in the official comment section. We will incorporate the discussion and reflect on other reviewers' feedback in the final manuscript.
Summary: This paper proposes Projection Regret (PR) to mitigate the bias of background information for novelty detection. As an effective perceptual distance, it is able to detect abnormality by reducing the effect of dominant background using recursive projections. Experimental results show the effectiveness of the proposed novelty detection framework. Strengths: 1. Good motivation to explain the background bias problem. 2. well written and easy to follow this paper. 3. The effectiveness of Projection Regret. Weaknesses: My main concerns lie in the following two aspects: (1) About the detection running time. Diffusion models are used in the proposed method, which leads to the increase of inference time when detecting the abnormality of test images. (2) It lacks experimental comparison on other practical industry product detection tasks, such as MVTecAD and BTAD benchmark, which seem to be more challenging to evaluate the effectiveness of the proposed method. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Refer to the Weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer mygo, We sincerely thank you for your helpful feedback and insightful comments. In what follows, we address your concerns one by one. ___ **[Q1]** About the detection running time. Diffusion models are used in the proposed method, which leads to an increase of inference time when detecting the abnormality of test images.\ **[A1]** During inference, our method (Projection Regret, PR) is very efficient because PR only requires three projections and each projection can be computed by one forward pass using a consistency model [1] that enables one-shot generation, as described in Algorithm 1. The second-best baseline, LMD [2], on the other hand, requires multiple (sequential) sampling steps for inpainting. As a result, PR is 1.8x faster than LMD. --- **[Q2]** It lacks experimental comparison on other practical industry product detection tasks, such as MVTecAD and BTAD benchmark, which seem to be more challenging to evaluate the effectiveness of the proposed method. \ **[A2]** We first note that industrial anomaly detection (IAD) like MvTecAD and BTAD benchmarks are not within our scope since they differ from our target task: unsupervised novelty detection (UND), also known as out-of-distribution (OOD) detection. To be specific, UND aims to detect OOD (i.e., novel or unseen) semantic categories/shapes, while the IAD task aims to detect a localized defect in a pixel-wise manner. Therefore, their sample distributions are also significantly different. For example, OOD samples of the IAD task only differ in the small localized region against the ID samples, but OOD samples of the UND task may differ in shapes, background, and so on. Due to the aforementioned reasons, the UND and IAD tasks have been investigated in different research directions, e.g., formulating why the novelty detection model overfits to non-semantic information in UND [3] and formulating abnormality score in local pixels in IAD [4,5]. We think that IAD is an interesting extension of our work and leave the extension for future work. ___ **References** \ [1] Consistency models, ICML 2023 \ [2] Unsupervised out-of-distribution detection with diffusion inpainting, ICML 2023 \ [3] Novelty detection via blurring, ICLR 2020 \ [4] Towards total recall in industrial anomaly detection, CVPR 2022 \ [5] Pushing the limits of few-shot anomaly detection in industry vision: Graphcore, ICLR 2023 --- Rebuttal Comment 1.1: Title: After the rebuttal Comment: Thanks for the authors' response. In the rebuttal. the authors cannot well address my concerns, w.r.t. running time and comparison problems. Although the proposed method seems to be novel, addressing my concerns is important to further evaluate the effectiveness of the proposed method. In the evaluated datasets, I think they are less biased toward the background. I will decrease my initial score to borderline accept. --- Reply to Comment 1.1.1: Comment: Dear Reviewer mygo, We appreciate your additional feedback on the rebuttal. We would like to further discuss with you about your concerns, especially on (1) the running time and (2) the evaluated datasets. ___ **[Q1]** About the running time \ **[A1]** In the following official comment (https://openreview.net/forum?id=3qHlPqzjM1&noteId=lKGqSQ3GLx), we summarize that our method can show significant acceleration (8.7x) compared to the second-best baseline, LMD [1]. Hence, we want to emphasize that our method is efficient compared to the existing out-of-distribution (OOD) detection methods that use diffusion models. ___ **[Q2]** Although the proposed method seems to be novel, addressing my concerns is important to further evaluate the effectiveness of the proposed method. In the evaluated datasets, I think they are less biased toward the background. \ **[A2]** We note that Reviewer ZGie and Reviewer SZQH questioned similar concerns on the evaluation. As such, in the rebuttal (https://openreview.net/forum?id=3qHlPqzjM1&noteId=gDN0zwDfDL), we performed the new experiment where the OOD dataset has the same background information as the in-distribution dataset. In such a dataset, our method shows significant performance gain (see Table 6 on Global Response PDF). Furthermore, the OOD detection methods trained in the CIFAR-10 dataset show underwhelming detection performance in detecting the CIFAR-100 OOD dataset where both methods share similar background information. Our method greatly increases the performance in such a dataset (see Table 1). It is also worth noting that reviewer ZGie acknowledged that our rebuttal addressed the concerns (update: both reviewers acknowledge our rebuttal). Furthermore, OOD detection and industrial anomaly detection (IAD) have been researched in separate directions. We strongly note that the application of OOD detection methods in the IAD benchmark is not a standard practice. For example, every baseline paper [1-6] that we compared does not experiment on such datasets. Nevertheless, in the rebuttal, we scaled our methods to the large-scale dataset (e.g. LSUN) and observed significant performance gain (see Table 8 in Global Response PDF). We ask the reviewer to keep this in mind and reconsider your concerns about the evaluation of such datasets. ___ In summary, we politely ask the reviewer to discuss 1) why our evaluated datasets are less biased toward the background even though we experimented on the dataset with the same background information, 2) why comparison against the IAD dataset is the issue where we also experimented on the large-scale dataset and none of the baseline methods did, and 3) why our computation time is the issue even though we observed significant gain on the computational cost compared to the second-best diffusion-based OOD detection method. We hope this discussion would clarify your concerns and please feel free to ask any further questions. ___ **References**\ [1] Unsupervised out-of-distribution detection with diffusion inpainting, ICML 2023\ [2] Input complexity and out-of-distribution detection with likelihood-based generative models, ICLR 2020\ [3] Likelihood regret: an out-of-distribution detection score for variational autoencoder, NeurIPS 2020\ [4] Multiscale score matching for out-of-distribution detection, ICLR 2021\ [5] VAEBM: a symbiosis between variational autoencoders and energy-based models, ICLR 2021\ [6] Guiding energy-based models via contrastive latent variables, ICLR 2023
Rebuttal 1: Rebuttal: Dear reviewers and ACs, We sincerely appreciate your valuable time and effort spent reviewing our manuscript. As reviewers highlighted, our work aims at an important problem **(Reviewer ZGie, 9KBe, SZQH)** with an interesting/novel method **(Reviewer ZGie,9KBe)**, strong empirical results **(Reviewer mygo, ZGie, 9KBe, SZQH)**, and a well-written/easy-to-follow writeup **(Reviewer mygo, 9KBe, SZQH)**. We appreciate your constructive comments on our manuscript. We have carefully addressed the comments with the following additional discussions and experiments: - **[Reviewer ZGie, SZQH]** Performance of Projection Regret on spurious OOD datasets with the same background information (Table 6). - **[Reviewer ZGie]** Performance of Projection Regret in CIFAR-10 one-class classification benchmark (Table 7). - **[Reviewer 9KBe, SZQH]** Performance of Projection Regret applied to larger diffusion models with larger dataset size (Table 8). - **[Reviewer 9KBe]** Robustness analysis on Projection Regret against common corruptions (Table 9, Figure 5). - **[Reviewer 9KBe]** Performance of Projection Regret measured in alternative metrics (Table 10). - **[Reviewer SZQH]** Sensitivity analysis of Projection Regret and the motivation experiment extended to multiple OOD datasets (Figure 6). We hope our response sincerely addresses all the reviewers’ concerns. Thank you very much. Best regards, \ Authors. Pdf: /pdf/477f624bdb3591b476446a8963c6d83a949adb3f.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
ReHLine: Regularized Composite ReLU-ReHU Loss Minimization with Linear Computation and Linear Convergence
Accept (poster)
Summary: This paper proposes a dual-coordinate descent solver for a class of ERM (Empricial Risk Minimization) problem with general linear inequality constraint, which is of linear convergence rate and efficient coordinate-update cost of O(n) (n is the number of samples). The main contribution is it extends existing method of dual-coordinate descent to handle problem with additional linear inequality constraints. Strengths: The paper has solid theoretical motivation, derivation and analysis, and has presented the idea clearly with solid experiments support on pratical optimization problems. Weaknesses: The paper completely missed discussion on existing works of dual-coordinate ascent on general ERM problem. There is a thread of works such as: Shalev-Shwartz, Shai, and Tong Zhang. "Stochastic dual coordinate ascent methods for regularized loss minimization." Journal of Machine Learning Research 14.1 (2013). on dual-coordinate ascent method for general ERM. The novelty of problem setup proposed in this reviewed paper is it adds an additional linear inequality constraints. The "piecewise-quadratic loss" is simply a special case of "smooth loss" of the above mentioned paper and SAME convergence rate applies to all this type of problem, so the author chould have made this work more general (by extending the formulation of the above work by simply adding linear inequliaty). Technical Quality: 3 good Clarity: 3 good Questions for Authors: To my knowledge, Liblinear is a special case of solver proposed in this paper, speicificaly designed for the SVM optimization problem. Then why could solver proposed in this paper perform more efficiently than Liblinear on SVM problem? Is it implementation issue? or their objective functions are different? The author should have made this clear to avoid confusion. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper does not need to limit itself to the case of Relu-Rehu function. Instead, any smooth loss (in the primal) yields strongly-convex dual problem. Therefore can apply same optimization method convergence rate proposed in the paper. (ref: https://www.jmlr.org/papers/volume14/shalev-shwartz13a/shalev-shwartz13a.pdf) Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Weaknesses > The paper completely missed discussion on existing works of dual-coordinate ascent on general ERM problem. There is a thread of works such as: > >> Shalev-Shwartz, Shai, and Tong Zhang. "Stochastic dual coordinate ascent methods for regularized loss minimization." Journal of Machine Learning Research 14.1 (2013). > > on dual-coordinate ascent method for general ERM. The novelty of problem setup proposed in this reviewed paper is it adds an additional linear inequality constraints. The "piecewise-quadratic loss" is simply a special case of "smooth loss" of the above mentioned paper and SAME convergence rate applies to all this type of problem, so the author chould have made this work more general (by extending the formulation of the above work by simply adding linear inequliaty). **Reply**: Thank you for bringing the reference for our attention. We will add the reference into our revision; see our global response for the discussions. In terms of novelty, we have the following important differences compared with the algorithm in the reference. 1. **Convergence rates are different**. In general, the "convex piecewise-quadratic loss" or the proposed ReHLine loss is NOT "smooth loss" but only "Lipschitz loss", thus the results in [SZ13] yielding that a sub-linear convergence (which is suboptimal compared to the linear convergence of Theorem 3 in our manuscript). Although refined analysis for "almost smooth function" is provided in [SZ13], the refined results, even for linear SVM, depend on the assumptions of underlying data distribution (see discussion in Section 5 and the definition of $N(u)$ in Theorem 16 in their paper). 2. **The technical details of the two methods are different**. [SZ13] and ours do share some similarities at a structual level. However, they differ in the details of the iterative steps. Specifically, [SZ13] directly solves the convex conjugate of the loss function, while our method further decomposes this loss function into simpler components for multi-step updates. Intuitively, our approach further leverages the decomposability and linearity of the loss function to perform multiple iterative updates on the decomposed components. 3. **Improve the practical feasibility of convex conjugate**. In fact, in each iteration, solving and implementing the convex conjugate of a general function is not an easy task, especially when the function is a composition of multiple different functions, also see the discussion in [SZ14]: "In general, this optimization problem is still not necessarily simple to solve because $\phi^*$ may also be complex." In comparison, our proposed ReLU-ReHU decomposition offers better implementation capabilities and property (e.g., Proposition 1). 4. As you mentioned, they cannot handle constrained optimization problems. ## Questions > To my knowledge, Liblinear is a special case of solver proposed in this paper, speicificaly designed for the SVM optimization problem. Then why could solver proposed in this paper perform more efficiently than Liblinear on SVM problem? Is it implementation issue? Or their objective functions are different? The author should have made this clear to avoid confusion. **Reply**: Thank you for the comments. It is indeed correct that the algorithm used in LibLinear to solve SVM is a special case of our method. We make this claim in order to highlight the efficiency of our implementation/software, as we regard software as a significant contribution of our work. We will revise the manuscript to clarify the confusion that this advantage is caused by implementation. ## Limitations: > The paper does not need to limit itself to the case of Relu-Rehu function. Instead, any smooth loss (in the primal) yields strongly-convex dual problem. Therefore can apply same optimization method convergence rate proposed in the paper. (ref: https://www.jmlr.org/papers/volume14/shalev-shwartz13a/shalev-shwartz13a.pdf) **Reply**: Thank you for the comments. As we mentioned in the previous point, our optimization objective is not a smooth function, and therefore does not readily yield linear convergence results based on the theoretical results in [SZ13]. In addition to theoretical considerations, we also place great emphasis on the algorithm's computational efficiency. Specifically, in the proposed framework, each coordinate can be solved with a simple analytic solution. This feature has demonstrated excellent efficacy and practical feasibility in our experiments. [SZ13] Shalev-Shwartz and Zhang (2013). Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization. [SZ14] Shalev-Shwartz and Zhang (2014). Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization.
Summary: This paper proposes a new optimization algorithm for convex piecewise linear-quadratic objectives with $L_2$ regularization and linear constraints, which achieves the best known iteration complexity simultaneously with the best known per-iteration computational cost, resulting in a smaller total computational complexity than any previous algorithm. Experiments show that the proposed algorithm achieves ~1000x speedup compared to previous solvers and improves over problem-specific solvers despite its generality. Strengths: 1. The considered optimization problem is broad, encompassing training for many linear models (SVM, LAD, SVR, QR) and constraint sets, including fairness. 2. The technique appears to be novel. 3. The comparison of total complexity (Table 2) demonstrates a significant theoretical advantage over previous algorithms. 4. The empirical comparison (Table 5) shows that the proposed algorithm enjoys a significant practical advantage over previous algorithms. Weaknesses: 1. The experimental evaluation is missing a few details that need clarifying. For example, line 212 states that the BenchOpt framework is used "to implement optimization benchmarks for all the SOTA solvers". I find this statement a little unclear. Did the authors implement all of the baseline solvers from scratch? If not, what do they mean by "implement optimization benchmarks"? Also, is it common to implement these solvers in Python? My understanding is that the production solvers are most commonly implemented in C/C++, and using a slower language like Python might amplify the difference in speed between algorithms. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Line 123 states that the proposed algorithm is inspired by LibLinear, but there is no further discussion of the relationship between ReHLine and LibLinear. What components of ReHLine are inspired by LibLinear, and what components of ReHLine are novel compared to LibLinear? 2. The introduction mentions that this work focuses on the case that the dataset size is much larger than the input dimension and the number of constraints, but this large-scale condition is not discussed further. Why is this large-scale condition significant to your approach, and how does ReHLine compare to baseline algorithms when $n$ is small? 3. Please clarify the questions I posed about the details of experimental evaluation in the Weaknesses section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The authors provided a reasonable discussion of the limitations of their approach. A discussion of potential negative societal impact is, in my opinion, not necessary for this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Weaknesses > The experimental evaluation is missing a few details that need clarifying. For example, line 212 states that the BenchOpt framework is used "to implement optimization benchmarks for all the SOTA solvers". I find this statement a little unclear. Did the authors implement all of the baseline solvers from scratch? If not, what do they mean by "implement optimization benchmarks"? **Reply**: We apologize for the confusion. To clarify, we would like to mention that we do not develop baseline solvers from scratch in our study. Instead, we utilize well-established software for benchmarking purposes. We appreciate your suggestion and will revise the manuscript accordingly in order to clarify any potential misleading description. > Also, is it common to implement these solvers in Python? My understanding is that the production solvers are most commonly implemented in C/C++, and using a slower language like Python might amplify the difference in speed between algorithms. **Reply**: Thank you for the thorough comments. All the methods that we compared, including our own, are fundamentally built upon C/C++ backends. Python just serve as a facilitator by providing an API interface that enhances their usability. See more description of backends in CVXPY and CVXCORE. Python acts as a bridge between the users and the underlying C/C++ backends, enabling a seamless and more user-friendly experience. Therefore, our Python experiments do not significantly impact the execution speed of the software itself. As a result, the experimental results are reliable and reproducible. We will add more descriptions in our manuscript. ## Questions > Line 123 states that the proposed algorithm is inspired by LibLinear, but there is no further discussion of the relationship between ReHLine and LibLinear. What components of ReHLine are inspired by LibLinear, and what components of ReHLine are novel compared to LibLinear? **Reply**: The development of ReHLine was largely inspired by the linear computational complexity and linear convergence property of LibLinear, which contribute to the great success of LibLinear in practice. However, in LibLinear, these properties are developed specifically ONLY for SVMs. One major merit of ReHLine is that these good properties actually apply to a much broader loss function class (with or without constraints), characterized by the convex PLQ functions. Therefore, with ReHLine, the efficiency of LibLinear can be extended to many other ERM problems, and LibLinear can be viewed as a special case of ReHLine. > The introduction mentions that this work focuses on the case that the dataset size is much larger than the input dimension and the number of constraints, but this large-scale condition is not discussed further. Why is this large-scale condition significant to your approach, and how does ReHLine compare to baseline algorithms when $n$ is small? **Reply**: As is shown in Table 1, one major contribution of this article is to reduce the per-iteration cost from $\mathcal{O}(n^2)$ to $\mathcal{O}(n)$, so that ReHLine demonstrates great advantages when $n$ is large and forms the bottleneck of computation. It can be seen that equation (8) is a quadratic function of $(K+nL+nH)$ variables. If $n$ is small, then the scale of the quadratic programming problem (7) is also small, which means that a general-purpose solver may already be sufficient. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I am satisfied with the quality of the paper and your answers cleared a few details up, therefore I will keep my rating the same.
Summary: This paper studies the Empirical Risk Minimization(ERM), which is an important framework of machine learning, and focus on a general regularized ERM based on a convex PLQ loss with linear constraints. According to the authors, the existing algorithms are faced with the problem of slow convergence or high computational cost. The authors leverage the linear property of Karush-Kuhn-Tucker conditions and propose ReHLine, whose total complexity is lower than existing algorithms. ## Comments It is recommended to indicate the unit of running time on Table 5. Strengths: 1. The authors cleverly decompose convex PLQ into a series of ReLU and the so-called rectified Huber units to deal with the problem that the standard definition of convex PLQ is not convenient to optimization algorithms.When solving the decomposed optimization problem, the authors prove that their ReHLine algorithm can achieve linear convergence rate. 2. After comparing with other state-of-the-art algorithms on multiple datasets, the experimental results show that the ReHLine algorithm can solve a variety of domain-specific problems, presenting excellent flexibility, and on large-scale datasets, the algorithm shows excellent acceleration ratio. Weaknesses: Actually, rather than those solvers tested in the experiment part, there are a number of general optimization solvers, including CPLEX, GUROBI, SCIP, etc. Those optimization solvers have exhibited great success in solving linear/nonlinear continuous/discrete optimization problems. Have you compared your proposed method against those general optimization solvers? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see my question in the "Weakness" section. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Weaknesses > Actually, rather than those solvers tested in the experiment part, there are a number of general optimization solvers, including CPLEX, GUROBI, SCIP, etc. Those optimization solvers have exhibited great success in solving linear/nonlinear continuous/discrete optimization problems. Have you compared your proposed method against those general optimization solvers? **Reply**: Thank you for drawing our attention to the availability of more general solvers. Solving Constraint Integer Programs (SCIP) is specifically designed for integer programming, so it does not align well with the nature of our problem. CPLEX and GUROBI are commercial solvers which require licenses to run. At present, we only obtain free licenses for conducting small-scale experiments, and they have failed for running the datasets described in our paper. To illustrate their effectiveness, we run the solvers in simulated datasets with RidgeQR compared with all other solvers (`fail` indicates that we are unable to obtain results under the current licenses). As indicated in the following table, the proposed algorithm continues to exhibit remarkable superiority over CPLEX and GUROBI. | RidgeQR | CPLEX | ReHLine | GUROBI | MOSEK | SCS | ReHLine | |:------|----------:|----------:|----------:|----------:|----------:|----------:| |n=100 | 5.073E-02 | 7.370E-05 | 1.337E-02 | 8.348E-03 | 2.082E-03 | 1.322E-04 | |n=300 | 1.437E-01 | 8.979E-05 | fail | 1.024E-02 | 1.207E-02 | 4.093E-04 | |n=500 | fail | 1.091E-04 | fail | 1.260E-02 | 1.952E-02 | 4.160E-04 | |n=700 | fail | 1.365E-04 | fail | 1.522E-02 | 2.865E-02 | 5.973E-04 | |n=1000 | fail | 1.870E-04 | fail | 1.828E-02 | 2.663E-02 | 1.459E-03 | --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thanks for the response! The authors have addressed my comment.
Summary: This paper introduces an algorithm, ReHLine, which has a linear convergence rate on minimizing convex piecewise linear-quadratic loss functions. Experiments on several tasks show great performance gains over existing algorithms. Strengths: General: The paper is clean well-written and easy to follow. Definitions are Proposition are well explained with examples. Pros: * Introduces ReLU-ReHU decomposition and show commonly-used loss functions have their ReLU-ReHU representation. * Finds the dual formulation and shows that proposed algorithm has linear convergence rate by Coordinates Descent. * Experimental results greatly outperform existing algorithms. Cons: * The main contribution of this paper is proposal of ReLU-ReHU decomposition and its resulting formulation. Its convergence result is shown by classical result. I feel the contribution is limited and marginally above the borderline. In all, this paper is well written and the framework ReLU-ReHU is promising. Although the contribution seems limited but I am happy to vote for accept. ---------- Main body checked but did not throughly go through appendix. Weaknesses: See above. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Weaknesses > The main contribution of this paper is proposal of ReLU-ReHU decomposition and its resulting formulation. Its convergence result is shown by classical result. I feel the contribution is limited and marginally above the borderline. **Reply**: Thanks for the comments. We would like to point out that our contribution has different aspects, and the convergence result is only one of them. As can be seen in Table 1, there exist other algorithms that achieve the same convergence rate as ReHLine. However, the total computational cost is affected by both the convergence rate and the per-iteration cost. Making one of the components efficient is possible, but achieving **both** is nontrivial. To make the linear convergence and linear computational complexity **simultaneously hold**, we need a special relation between the primal and dual variables, as is characterized by equation (9), and the objective function needs to have some specific structures to support the convergence rate. One major finding in this article is that the relation (9) is met by a very general loss function class, the convex PLQ functions, and the resulting algorithm has provable linear convergence. To the best of our knowledge, prior to ReHLine, these properties were only studied for SVMs in the LibLinear solver, and now we have greatly expanded the class of models that enjoy the linear convergence and linear computation properties.
Rebuttal 1: Rebuttal: # To All Reviewers Thank you all reviewers for the encouraging and insightful comments. We appreciate the time and effort the reviewers have dedicated to providing valuable feedback on our manuscript. In this round, we have made every effort to address all the comments of the reviewers. **The point-to-point responses are provided in the reply**. Additionally, we would like to take this opportunity to highlight our contributions in software development. We have dedicated a significant amount of time to optimize our C/C++ implementation, Python API, quick routines of various losses and constraints, and provide comprehensive documentation. Our motivation behind these efforts stems from the fact that there are currently not many practical software solutions available. For instance, in simple problems like FairSVM [39,40], the authors opted to use generic but less efficient software such as DCCP for implementation. Given that our software not only operates efficiently but also offers a high level of flexibility and practicality in various applications, we sincerely hope that the reviewers will take into consideration our contribution to software when evaluating this article. As suggested by the reviewers, one point we want to highlight globally is the added discussions on the existing literature. In our revised manuscript, we have included more related works and discussed their connections with ReHLine. Below is an excerpt of the planned revision: > There are multiple existing works developed to tackle ERM problems, such as SAG, SVRG, SAGA, and SDCA. However, SAG, SVRG, and SAGA can only handle smooth loss functions, with an optional non-smooth term that has an easy proximal operator. SDCA applies to general loss functions, but it requests the convex conjugate of loss functions, which is not necessarily simple to compute. Moreover, it only guarantees a sub-linear convergence rate for non-smooth loss functions. In contrast, ReHLine supports all convex PLQ loss functions with optional general linear constraints, which are non-smooth by construction, and they all enjoy a linear computational complexity and provable linear convergence. > > For non-smooth loss functions, another existing method to solve ERM is the smoothing technique (Beck and Teboulle, 2012), which approximates the non-smooth terms by smooth functions, and then uses gradient-based algorithms to solve the smooth problem. However, the choice of the smoothing function and smoothing parameter typically requires additional knowledge, and the linear convergence rate is not always guaranteed. Furthermore, we would like to supplement the following important differences between ReHLine and SDCA. 1. **Convergence rates are different for non-smooth losses**. In general, the PLQ or the proposed ReHLine loss is non-smooth but Lipschitz function, thus the theoretical results of SDCA [SZ13] yield a *sub-linear* convergence (which is suboptimal compared to the linear convergence of Theorem 3 in our manuscript). Although refined analysis for "almost smooth function" is provided in [SZ13], the refined results, even for linear SVM, depend on the assumptions of underlying data distribution (see the discussion in Section 5 and the definition of $N(u)$ in Theorem 16 of [SZ13]). 2. **The technical details of the two methods are different**. [SZ13] and ours do share some similarities at a structual level. However, they differ in the details of the iterative steps. Specifically, [SZ13] directly solves the convex conjugate of the loss function, while our method further decomposes this loss function into simpler components for multi-step updates. Intuitively, our approach further leverages the decomposability and linearity of the loss function to perform multiple iterative updates on the decomposed components. 3. **Improve the practical feasibility of convex conjugate**. In fact, solving and implementing the convex conjugate of a general function is not an easy task, especially when the function is a composition of multiple different functions. In comparison, our proposed ReLU-ReHU decomposition offers better implementation capabilities and properties (e.g., Proposition 1). 4. SDCA cannot handle **constrained** optimization problems. [SZ13] Shalev-Shwartz and Zhang (2013). Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization. Pdf: /pdf/1cd04f00b578806950f3c830f5e02a02df3f850a.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: The authors consider tackling the optimization fo finite sum of piecewise linear quadratic functions (PLQ) arising from e.g. robust empirical risk minimization by means of a reformulation of PLQ functions as sums of ReLU and smoothed ReLU functions combined with a stochastic dual ascent algorithm. The authors start by presenting how any PLQ function can be formulated as sums of of ReLU and smoothed ReLU functions with a table summarizing some common ones. Then the proposed algorithm is presented whose convergence is ensured by previous work. Experimental results advocate for the superiority of the approach compared to old all-purpose solvers in various classification tasks. Strengths: - The generality of the approach is well defended by the flexibility of the parameterization with ReLU and smoothed ReLU functions. - The proposed algorithm captures efficiently the underlying structure of the algorithm. In particular, compared to primal-based approaches (which could easily be implemented using the proximal operators of the losses), the approach can handle constraints. - The proposed algorithm is hyper-parameter free. - The experiments could be a strength, although they are simply misleading in their current state. Weaknesses: - The paper lacks some important related work. In particular, the proposed algorithm appears to simply be a form of stochastic coordinate dual ascent. Such a note would help gain perspective on the contributions of the authors. - A - Numerous algorithms have been developed to tackle empirical risk minimization problems. For example, s-SVM or SVM^2 losses are smooth and could be tackled by fast incremental solvers such as SAGA, SVRG or SDCA or even their accelerated versions such as [1]. see also [2]. Non-smooth losses can also generally easily be smoothed, see e.g. [3], to be amenable theoretically and in practice to fast resolution by fast incremental algorithms. True, most of these algorithms require stepsizes, which make the proposed approach friendlier at first. However, SDCA for example does not require any and could a priori be formulated like the proposed algorithm (using that the proximal operator of the conjugate of the losses is available here). A thorough discussion and comparison to all of the aforementioned algorithms is necessary to held this paper to the claims the authors make. Additional - Non-negativity, box and monotonicity constraints can be handled by simple projections and primal algorithms could also be considered in those cases. - The current approach cannot handle non-strongly convex penalty such as sparsity inducing penalties. - The experimental results and claims are misleading: no one uses all-purpose solvers such as MOSEK. In particular no one uses interior point methods for empirical risk minimization. Claiming 1000 times improvement is overselling the method which in turns is detrimental to the contributions of the paper. Usual comparisons consist in claiming improvements over state of the art method, which appear to hold but at much smaller scale. The proposed algorithm may provide quantitative benefits, but a comprehensive comparison would help the paper. Minor details: - E appears not have been defined in Theorem 2. Similarly recalling what a mode-3 unfolding of a tensor is, would improve the readability of the paper. [1] A. Defazio. A Simple Practical Accelerated Method for Finite Sums. NIPS 2016 [2] S Shalev-Shwartz, T Zhang. Accelerated Mini-Batch Stochastic Dual Coordinate Ascent. NIPS 2013 [3] A Beck. Smoothing and First Order Methods: A Unified Framework. SIOPT 2012 Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can the authors do comparisons with state of the art algorithms for empirical risk minimization for each of the proposed problems rather than using all-purpose algorithms? A list of potential candidates has been given in the weakness section. - In particular, can the authors discuss and compare empirically their algorithm against SDCA in the absence of constraints? - How can the proposed algorithms handle nonlinear models such as kernel methods? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: - No support for sparsity inducing norms Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Weaknesses > (related work) **Reply**: Thanks for the comments. We emphasize that our contribution is not only applying CD (and its extensions such as SDCA) to a specific optimization problem, but also to understand the class of problems such that the CD variant can be implemented with linear computational complexity and provable linear convergence. In our revised manuscript, we have added more related works and discussed their connections with ReHLine. See our global response. > (s-SVM example) **Reply**: Following your suggestion, we have now added a new benchmark for smoothed SVM (s-SVM) to compare the running times of various algorithms, including SAGA, SAG, SDCA, SVRG, and the proposed ReHLine. In this experiment, the specialized implementations of the algorithms for s-SVM are implemented using the Python library `lightning`. It is worth mentioning that computationally demanding parts in `lightning` are implemented via `Cython` (C-Extensions for Python), so we believe that the comparison is roughly fair. |s-SVM|SAGA|SAG|SDCA|SVRG|ReHLine| |-|-|-|-|-|-| |SPF|3.154E-03|2.462E-03|1.316E-03|3.095E-03|5.298E-04| |philippine|1.292E-01|7.322E-02|2.210E-02|1.325E-01|1.086E-02| |sylva_prior|5.084E-02|4.055E-02|2.371E-02|3.140E-02|1.008E-02| |creditcard|1.146E-01|1.625E-01|1.438E-01|1.164E-01|6.451E-02| |**speed-up**|1.8~11.9x|2.5~6.7x|2.0~2.5x|1.8~12.2x|--| As indicated in the table, even in smooth problems like s-SVM, ReHLine has shown reasonable improvement (or at least demonstrated comparable performance) of these specialized implementations. > (other algorithms for ERM) **Reply**: Thanks for providing these references. As we have shown in the response to the first question, in the planned revision we have added discussions on such related works. Some key points are again summarized below: 1. SAG, SVRG, and SAGA require smooth loss functions, with an optional non-smooth term that has an easy proximal operator. 2. Smoothing methods as in [BT12] do not have a linear convergence guarantee. 3. ReHLine can handle linear constraints, whereas SDCA does not consider such cases. 4. SDCA requests the convex conjugate of the loss function, but computing the convex conjugate of a general convex PLQ function is not straightforward. 5. SDCA is only guaranteed a sub-linear convergence rate for **non-smooth** losses. > (special constraints) **Reply**: Of course, these methods can make use of the special structures of the constraints, but in ReHLine all linear constraints can be handled in a unified way, without sacrificing the computational efficiency and convergence rate. > (sparsity inducing penalties) **Reply**: Thank you for bringing this up. While strong convexity is necessary for our theoretical analysis, sparsity can still be achieved by incorporating elastic net penalties (L1+L2), as described in Proposition 2. Additionally, for L1 penalty, the technique used in Equation (5) of Section 4.2 in [SZ14] can also be applied to ReHLine. > (experimental results) **Reply**: Thank you for the comments. Following your suggestion, we have added a new benchmark to compare our proposed method with other specialized implementations in s-SVM. Please refer to our reply to the previous points for more details. However, we would like to emphasize that there is still a certain usefulness for all-purpose solvers. For example, even in simple problems like FairSVM [39, 40], the authors opted to use an all-purpose solver but less efficient software such as DCCP for implementation. Since our algorithm possesses some attributes similar to all-purpose solvers, including custom losses and constraints, to clarify our experimental conclusions, we would like to present them in two parts: (i) When compared to all-purpose solvers, ReHLine has achieved significant improvements; (ii) In the case of specialized algorithms/implementations, ReHLine can compete with and even outperfom specialized implementations in specific scenarios. > (minor details) **Reply**: Thanks for pointing out. $\mathbf{E}$ stands for a matrix of all ones, and the mode-3 unfolding of a tensor $\mathbf{U}=(u_{ijk})\in\mathbb{R}^{m\times n\times l}$ is the matrix $V=(v_{ks})\in\mathbb{R}^{l\times (mn)}$, where the $k$-th row of $V$ is the vectorization of the $k$-th slice of $\mathbf{U}$, i.e., $V_{k\cdot}=\mathbf{vec}(U_{\cdot\cdot k})^T$. ## Questions > (first question already covered in the Weakness section) > (comparison with SDCA) **Reply**: The differences between ReHLine and SDCA are summarized in the global response. > (kernel methods) **Reply**: Thank you for the insightful comments. Our method can be extended to the following kernel learning problem: $$\min_{\beta}\sum_{i=1}^n L_i(\beta^TK_i)+\frac{1}{2}\|\beta\|_K^2,\quad\text{s.t. }A\beta+b\geq 0,$$ where $K$ is a p.d. kernel matrix, $\beta$ becomes a length-$n$ vector, and the prediction function is now $f(x_j)=\beta^T K_j$. Using Cholesky factorization $K=Q^TQ$, and denoting $\alpha=Q\beta$, it can be rewritten as: $$\min_{\alpha}\sum_{i=1}^n L_i(K^T_i Q^{-1}\alpha)+\frac{1}{2}\|\alpha\|_{2}^2,\quad\text{s.t. }AQ^{-1}\alpha+b\geq 0,$$ which follows the form of (1) in our paper, with $x_i \leftarrow K^T_i Q^{-1}$, $A\leftarrow AQ^{-1}$, thus can be solved by the proposed ReHLine method. However, one major contribution of this article is to reduce the per-iteration cost from $\mathcal{O}(n^2)$ to $\mathcal{O}(nd)$. In the kernel learning $d = n$, yielding the computational complexity re-increase to $\mathcal{O}(n^2)$. In this light, when it comes to solving problems related to kernel learning or high-dimensional problems where d is greater than n, we do not recommend using the ReHLine algorithm. We will add more discussion in the revision. [BT12] Beck and Teboulle (2012). Smoothing and First Order Methods: A Unified Framework. [SZ14] Shalev-Shwartz and Zhang (2014). Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization. --- Rebuttal Comment 1.1: Title: Thanks for the answer Comment: I thank the authors for their detailed answers. I appreciate the additional comparisons. I believe the paper will highly benefit from such nuanced comparisons. Highlighting the versatility of the approach while remaining competitive in comparison to previous dedicated algorithms is, I believe, a better way to present this work. I increased my score in light of the more detailed discussion of previous methods. --- Reply to Comment 1.1.1: Comment: Thank you for the insightful comments. We are grateful for the increased rating to our paper. We will further revise our manuscript based on your suggestions.
Summary: This paper introduced a new function class called composite ReLU-ReHU which is shown to be equivalent to the class of convex PLQ functions. Based on the ReLU-ReHU decomposition of convex PLQ functions, the authors then formulated a new box-constrained quadratic programming optimization problem, named ReHLine optimization which can be solved efficiently by the ReHLine algorithm. The proposed solver has a provable linear convergence rate and a linear per-iteration computation complexity. Benchmarking on various tasks and datasets, the ReHLine showed significant improvement over both generic and specific solvers, especially on large-scale datasets. Strengths: - This paper is well-written and easy to follow. - To the best of my knowledge, the composite ReLU-ReHU functions and the ReHLine algorithm are novel. - The effectiveness of the proposed method is adequately backed by both theoretical results and empirical results. - The ReHLine algorithm successfully tackles any convex PLQ loss functions, thus it shows great potential for many machine learning tasks. Weaknesses: - While authors claimed that the simplification of Canonical CD updates is highly non-trivial, it seems to be a straight application of [18] to the ReHLine optimization problem. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Are the experimental results using Algorithm 1 or Algorithm 2 in the Appendix? Could the authors provide the running time of both algorithms for comparison? 2. Could the authors provide an ablation study to demonstrate the objective gaps of different solvers when increasing the number of iterations? Minor: - There is a typo in Proposition 2: $T \leftarrow \begin{pmatrix} \sqrt{\frac{2}{\lambda_2}} \boldsymbol{T} & \boldsymbol{0}_d^T \end{pmatrix}$. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed the limitations of their work in Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Weaknesses > While authors claimed that the simplification of Canonical CD updates is highly non-trivial, it seems to be a straight application of [18] to the ReHLine optimization problem. **Reply**: Thanks for the comments. When the ReHLine optimization problem (1) and (3) is given, the paper seems to be a direct application of LibLinear [18]. However, the main contribution of our paper is actually the induction/construction of a class of problems that allows for the generalization of the LibLinear [18] algorithm, achieving both linear convergence and linear computational complexity. Specifically, we would like to point out that whether the canonical CD can be simplified largely depends on whether the special relationship between the primal and dual variables as in equation (9) exists, and meanwhile we request the algorithm to have linear convergence. [18] is an important reference as it discovers this relationship in SVMs and proves its linear convergence. However, to the best of our knowledge, prior to ReHLine it is unknown whether these two properties hold for other more general cases. In this article, we have greatly expanded the class of models that possess these properties. In particular, we show that they are available not only for the hinge loss as in SVMs, but also for all convex PLQ functions. We think that this generalization is novel and non-trivial. ## Questions > Are the experimental results using Algorithm 1 or Algorithm 2 in the Appendix? Could the authors provide the running time of both algorithms for comparison? **Reply**: Thank you for the comments. We are using Algorithm 2 (ReHLine solver with shrinking) for the experiments. Following your suggestion, we additionally conducted experiments to compare the ReHLine with and without shrinkage in the following tables. The experimental results suggest that our algorithm's superiority (compared to other methods) is not influenced by shrinkage. In fact, in most cases, ReHLine without shrinkage may even outperform the one with shrinkage. | FairSVM | ReHLine(shrink=False) | ReHLine(shrink=True) | |:-----------|----------:|----------:| |SPF | 3.951E-04 | 4.130E-04 | |philippine | 1.186E-01 | 1.602E-02 | |sylva_prior | 9.234E-03 | 1.010E-02 | |creditcard | 3.318E-01 | 2.157E-01 | | ElasticQR | ReHLine(shrink=False) | ReHLine(shrink=True) | |:---------------|----------:|----------:| |liver-disorders | 8.111E-05 | 2.165E-04 | |kin8nm | 6.109E-04 | 3.491E-03 | |house_8L | 1.618E-03 | 9.481E-03 | |topo_2_1 | 8.242E-03 | 4.549E-02 | |BT | 1.454E-01 | 2.918E+00 | | SVM | ReHLine(shrink=False) | ReHLine(shrink=True) | |:-----------|----------:|----------:| |SPF | 8.351E-04 | 3.968E-04 | |philippine | 1.193E-01 | 1.083E-02 | |sylva_prior | 4.841E-03 | 5.709E-03 | |creditcard | 2.973E-02 | 5.842E-02 | | ElasticHuber | ReHLine(shrink=False) | ReHLine(shrink=True) | |:---------------|----------:|----------:| |liver-disorders | 1.121E-04 | 1.210E-04 | |kin8nm | 8.201E-04 | 2.176E-03 | |house_8L | 7.009E-04 | 1.004E-03 | |topo_2_1 | 1.025E-1 | 2.084E-02 | |BT | 1.737E-01 | 4.263E-01 | > Could the authors provide an ablation study to demonstrate the objective gaps of different solvers when increasing the number of iterations? **Reply**: Following your suggestion, we have included the figures depicting the optimization progress, specifically the objective function value over time or step, for all the benchmarks. Kindly refer to the attached PDF document for a comprehensive overview. > Minor: There is a typo in Proposition 2: $\mathbf{T} \leftarrow\left(\begin{array}{cc}\sqrt{\frac{2}{\lambda_2}} \mathbf{T} & \mathbf{0}^\intercal_d\end{array}\right)$ **Reply**: Thanks for pointing out the typo. We will fix it in the revision. --- Rebuttal Comment 1.1: Title: Respond to Authors Comment: I thank the authors for their response. After reading their rebuttal, I appreciate that the authors have addressed all of my concerns.
null
null
null
null
DisDiff: Unsupervised Disentanglement of Diffusion Probabilistic Models
Accept (poster)
Summary: Imprecisions - “According to the completeness requirement” add reference - line 75-76 PADE-> PDAE - line 270-271. Typo: compare with disco. dissdiff -> compared with disco, disdiff Strengths - Achieving disentangled representations in diffusion models is an interesting and useful problem. Weaknesses - Missing derivation of reverse diffusion process, Eq 5, 6, 7. - What is the difference between the proposed model and a latent diffusion model? - Diff-AE and PDAE should also be considered as baselines in experiments. - Image generation experiments against Diff-AE and PDAE should be conducted in addition to disentanglement in order to add robustness to the experiments. - Unclear how Figure 3 is generated. Could you describe the process in detail and explain what you mean by “swapping the representation”. Strengths: see summary Weaknesses: see summary Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: see summary Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: see summary Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments and suggestions. We appreciate your feedback and have made several changes to address your concerns. Please find below our responses to each point you raised. We have thoroughly revised Section 4 of our paper. We have made significant changes in the presentation and organization of the section, ensuring that our arguments are clearly articulated and well-supported. We highly encourage the reviewers to revisit our revised Section 4, as we are confident that the improvements made will provide a clearer understanding of our research. We appreciate the valuable feedback provided by the reviewers and look forward to receiving further comments that will help us refine and strengthen our paper. - Imprecisions: We have revised the manuscript according to your suggestions and corrected the imprecisions. We have the typos in the main paper. - Derivation of reverse diffusion process (Eq. 5, 6, 7 in our submission): Based on your feedback, we have now included the derivation of these equations.Please refer to comment “Section 4”. Eq.5 is taken from [3] (Please see Equation 5 in it). Eq. 6 is taken from the classifier guidance trick proposed in [1] (Please see Equation 14 in it), the role of it is to build the score function of conditional distribution. Eq. 7 is taken from the reverse problem proposed in [2] (Please see Equation 10 in it), the role of it is to recover the data sample with the score function. We believe that this addition helps improve the clarity and completeness of our presentation. - Difference between the proposed model and a latent diffusion model: The key difference between our proposed model and a latent diffusion model is that the latter is a general DPM without disentangled representations and the ability to sample according to independent factors. Our method is designed to disentangle general DPMs, providing them with these capabilities. We have clarified this distinction in our revised manuscript. - Additional baselines and experiments: We appreciate your suggestion to include Diff-AE and PDAE as baselines in our experiments. We will add these baselines and conducted image generation experiments against them in addition to disentanglement in discussion stage. - Clarification on Figure 3: We apologize for any confusion regarding the generation of Figure 3. In our revised manuscript, we have provided a detailed explanation of the process. To "swap the representation," we first encode the images to obtain their representations. We then exchange the representations corresponding to the same factor between two different images. Finally, we decode the swapped representations to generate new images. We believe that addressing your concerns has significantly improved the quality and clarity of our paper. We appreciate your constructive feedback. [1] Diffusion Models Beat GANs on Image Synthesis. [2] Improving Diffusion Models for Inverse Problems using Manifold Constraints. NeurIPS 2022 [3] Unsupervised Representation Learning from Pre-trained Diffusion Probabilistic Models. NeurIPS 2022 --- Rebuttal Comment 1.1: Comment: Thanks for addressing my comments, I updated my score. --- Reply to Comment 1.1.1: Title: Response Comment: Dear Reviewer kfVZ, thank you for the improved rating. We appreciate your valuable feedback and will continue to enhance our work. Best regards
Summary: This paper proposes to disentangle a pre-trained diffusion probabilistic model (DPM) in an unsupervised way to improve interpretability. The author designed two constraints, invariant condition and variant condition, to guide the disentanglement. The proposed method was evaluated on three synthetic datasets and CelebA, a facial image dataset. Strengths: - The combination of disentangled representation learning and diffusion-based models is an important and challenging problem. - Judging solely by the experimental results, the proposed method works to some extent on synthetic datasets. Weaknesses: - Due to unclear definitions and poor mathematical formulation, I was unable to completely understand the proposed method. - The author claimed that "the conditional independence of the representations is a necessary condition for disentanglement". However, it is not sufficient. Disentanglement is more than the independence of representations. Even the definition of disentanglement used in this paper is unclear. "For the first time achieving disentangled representation learning in the framework of DPMs" is overclaiming. - The author's claims are a bit contradictory. The author claimed that "disentangling a DPM can improve the interpretability of DPM" but also admitted that "Our method is completely unsupervised, and without any guidance, the learned disentangled representations on natural image sets may not be easily interpretable by humans." It seems that the author had a goal but failed to achieve it. Is it realistic to consider completely unsupervised disentanglement? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - What is the mathematical definition of _disentangled conditional sub-gradient fields_? What does "The data distribution $p(x)$ can be **disentangled** into $N$ independent distributions $\\{p(x \mid f^k) \mid k = 1, \dots, N \\}$" mean? - What does "the data sample space is not **well-organized**" mean? - Cross entropy compares probability distributions. However, Eq. (10) uses a "distance vector" $d$ and "index" $c$. Eq. (12) even uses a difference of distance vectors. The math here is too questionable to make this paper credible. Please revise this part. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Section 6 discussed the limitations of unsupervised disentanglement and diffusion-based methods. "The potential negative societal impacts are malicious uses." is too general and ambiguous. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments and suggestions. We appreciate your feedback and have made several changes to address your concerns. Please find below our responses to each point you raised. We have thoroughly revised Section 4 of our paper. We have made significant changes in the presentation and organization of the section, ensuring that our arguments are clearly articulated and well-supported. We highly encourage the reviewers to revisit our revised Section 4, as we are confident that the improvements made will provide a clearer understanding of our research. We appreciate the valuable feedback provided by the reviewers and look forward to receiving further comments that will help us refine and strengthen our paper. - Overclaim: We apologize for any overclaiming in our previous manuscript. We have now revised the manuscript and properly toned down our claims accordingly. - Definitions and mathematical formulation: We have carefully revised the manuscript to provide clearer definitions and improved mathematical formulations. Please refer to comment “Section 4”.We believe that these changes have significantly improved the clarity of our paper and helped address your concerns regarding the understanding of the proposed method. - Necessary condition for disentanglement: We agree that the conditional independence of the representations is a necessary, but not sufficient, condition for disentanglement. We have revised the manuscript to clarify this point and provide a more accurate statement: "We follow prior works in disentangle representation literature that propose a necessary condition that works in practice for disentanglement. For example, the group constraint in [43], the maximization of mutual information in [23, 41]. We propose to minimize the mutual information between as a necessary condition." - Contradictory claims: We apologize for any confusion. We want to demonstrate that:“ Due to the fact that disentangled representation learning on real-world data is quite challenging. While our method demonstrates effectiveness on synthetic datasets, we find that the learned disentangled representations on natural image datasets may not be easily interpretable by humans. We infer that the possible reasons behind it are the requirement of reconstruction and the weak ability of pretrained DPM. Therefore, we have also included additional results with a more powerful DPM in the appendix that demonstrate the effectiveness of our method for disentangled generation on real-world data. This shows that our method can be beneficial for disentangled generation, We have now revised the manuscript properly and accordingly. Questions: - We apologize for any confusion caused by definitions and mathematical formulation. We have carefully revised the manuscript to provide clearer definitions and improved mathematical formulations. Please refer to the comment “Section 4”. We highly recommend you read that part first, we will clarify the questions proposed. - “disentangled conditional sub-gradient fields”: We assume that the dataset is generated by N factors, $D=\lbrace x_0|x_0\sim p(x)\rbrace$, there is a one-one mapping: $h: (f^1,f^2,\dots,f^N)\rightarrow x_0$, there is a set of conditional distribution $\lbrace p(x|f^c)|c = 1,\dots,N\rbrace$. The disentanglement of DPM is to learn the score function $\nabla_{x_t} \log p(f^c|x_t)$ so that we can sampling from conditional distributions $p(x_t|f^c)$. - ”the data sample space is not well-organized“ We mean that the data space is complex, it is difficult to sampling from $p(x|f^c)$, when we have the score function $\nabla_{x_t} \log p(x_t)$, by this we want to emphasize the difficulty of the problem. - “Cross entropy”: Please note that the cross entropy here we use is the cross entropy loss function in classification problems. The inputs of this function are “logits” and “labels”(GT). In our setting, we regard the distance as logits and indexes as labels. As comment “Section 4” shows, we use this cross entropy loss as a tool for implementation of minimizing the distance in the Proposition. Since cross entropy loss is a bounded function, the usage of such a loss function has the benefit for training stability. We appreciate your constructive feedback, which has helped us improve the quality and clarity of our paper. --- Rebuttal Comment 1.1: Comment: Thank you for your answers. I appreciate the improved mathematical notation. I'm now slightly more positive about this work, but I still need some time to check the details. A small question: by "one-one mapping", do you mean injection or bijection? Either way, it may be true for synthetic data like 3D Shapes, but it is hardly true for real data because of unannotated factors: there could be multiple examples with the same set of factors. How did you deal with it? (btw, you used $h: x_0 \mapsto (f^1, \dots, f^N)$ in the paper but $h: (f^1, \dots, f^N) \mapsto x_0$ in the rebuttal. Which is correct? Please double-check.) PS: I don't think "cross entropy loss function" is a widely accepted and uniquely defined concept (yes I know how `torch.nn.CrossEntropyLoss` is implemented). When we talk about cross-entropy, it's always defined for probability distributions. Please specify and justify the probabilistic model you want to use. --- Reply to Comment 1.1.1: Title: Response Comment: Dear Reviewer ZKsn, we would like to thank you for follow-up and increasing the score. We are more than happy for the reviewers to verify the details. We appreciate your valuable feedback and will continue to enhance our work. 1.Regarding the first question - We appreciate the reviewer raising this issue. We apologize for our statement not being rigorous enough, as it is tailored to artificial datasets like Shapes3D. What we meant by mapping $h$ to express here is the dependency of $x_0$ on each factor $f^c$, as shown in the probability graph model of Figure 1 (a). The sampled $f^c$ and $x_0$ actually only have a conditional probability dependency, $p(x_0|f^1...f^N)$ (for Shapes3D dataset, $p(x_0|f^1...f^N) = 1$). As we can see from the comment "section 4", we only used this conditional probability (PGM) as our assumption, not the one-one mapping (bijection). Therefore, our method is also effective for general datasets, e.g., CelebA. - We will remove the mapping $h$ and modify the sentence to "The data $x_0$ depends on each factor $f^c$, indicating the presence of dependency relationships in the probability graphical model, i.e., $p(x_0|f^1...f^N)$, as illustrated in Figure 1(a)." 2.Regarding the second question We agree with the reviewer that cross-entropy should be between two distributions, and the probability distribution used should be specified. We would like to change our previous writing to: - For Invariant Loss, we use the softmax of the representation distances as the probability $q_I(i=k)$, which indicates the probability of the largest change of representation occurring in unit $k$ after encoding the conditioned sampled images. The largest change should occur in the $c$-th unit, so for the target distribution $p_I$: the probability of the $c$-th unit having the largest change is 1, and for all other units it is 0, i.e., $p_I(i=c) = 1, p_I(i=k)=0, k \neq c$. - For Variant Loss, we use the softmax of the difference between two representation distances as the probability $q_V(i=k)$, which indicates the probability that unit $k$ has the smallest representation distance (consistent with the input image) after encoding the conditioned sampled images. Similarly, we expect the $c$-th unit of representation to be the most consistent. Therefore, for the target probability $p_V$: the probability of the $c$-th unit being the most consistent is 1, while for all other units, it is 0, i.e., $p_V(i=c) = 1, p_V(i=k)=0, k \neq c$.
Summary: This paper proposes a new task of unsupervised disentanglement of diffusion probabilistic models (DPMs) and presents an approach named DisDiff to achieve disentangled representation learning in the framework of DPMs. The authors connect disentangled representation learning to DPMs to take advantage of the remarkable modeling ability of DPMs. DisDiff learns a disentangled representation of the input image in the diffusion process and for each factor, DisDiff learns a disentangled gradient field, which brings new properties for disentanglement literature. The proposed method is evaluated on several real-world datasets, and the results show that DisDiff outperforms state-of-the-art methods in terms of disentanglement quality and interpretability. The paper concludes by discussing potential future directions for applying DisDiff to more general conditioned DPMs and pre-trained conditional DPMs. Strengths: Strengths: - Performance: The proposed method is evaluated on several real-world datasets, and the results show that DisDiff outperforms state-of-the-art methods in terms of disentanglement quality and interpretability. - Clarity: The writing is good overall and the proposed idea in this paper is easy to follow. The authors provide good explanations of the method and the experiments. - Significance: The proposed method has potential applications in various fields, such as image editing, controllable generation, etc. - References: The paper provides comprehensive references. Weaknesses: Weaknesses: - Overclaim: this paper claims that they are the very first work introducing disentanglement tasks for diffusion probabilistic models (DPM). However, a very recent work [1] in ICML 2023 has introduced and studied this problem. I understand that there might be some timeline issue that makes the authors not aware of this work. But it would be better if the authors could properly cite and discuss this work and revise their claims in the corresponding paragraphs. - Abuse of notations: there are some wrong notations (might be typos) and abuse of notations, especially in Sec. 4.3. For example, in line 179, it should be $E_\phi^k(\hat{x}_0^c)$ and $E_\phi^k(\hat{x}_0)$. In lines 180 - 181, the conditioned representation should be $E_\phi^c(\hat{x}_0^c)$ and the claim should be that the unconditioned one is closer to $E_\phi^c(\hat{x}_0)$ than the conditioned one. I suggest the authors carefully examine their statements instead of letting the reviewers guess the meaning. - More baselines: I appreciate that the authors include some of the most important baselines of VAEs and GANs along the line of disentanglement research. However, some more recent diffusion baselines could also be included, e.g., PADE [2] and DiffAE [3] (I understand [1] would be too new to have the open-source codebase to evaluate). - More metrics for CelebA: there is a tailored quantitative metric [4] for evaluating disentanglement performance on CelebA. It would be better to have the numbers in the tables. - In line 176, fulfil -> fulfill [1] Wang, Yingheng, et al. "InfoDiffusion: Representation Learning Using Information Maximizing Diffusion Models." arXiv preprint arXiv:2306.08757 (2023). [2] Zhang, Zijian, Zhou Zhao, and Zhijie Lin. "Unsupervised representation learning from pre-trained diffusion probabilistic models." Advances in Neural Information Processing Systems 35 (2022): 22117-22130. [3] Preechakul, Konpat, et al. "Diffusion autoencoders: Toward a meaningful and decodable representation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [4] Yeats, Eric, et al. "Nashae: Disentangling representations through adversarial covariance minimization." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Distortion in generated images: although the quantitative metrics (e.g. DCI) are pretty promising, the qualitative results on CelebA still seem a bit distorted with disentangled latent variables. Would there be some training tricks that can be applied to alleviate this issue? Are there any other explanations? - In line 159, the authors claim that Eq. (7) is derived from Eq. (6) by using Tweedie's formula, however, in my opinion, this is just a straightforward application of reparameterization trick. Can the authors provide the full derivation of the methodology part? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: There is a section of discussion for limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments and suggestions and the encouraging words. We appreciate the time and effort you took to review our paper. Please find below our responses to your concerns and the changes we have made to address them. We have thoroughly revised Section 4 of our paper. We have made significant changes in the presentation and organization of the section, ensuring that our arguments are clearly articulated and well-supported. We highly encourage the reviewers to revisit our revised Section 4, as we are confident that the improvements made will provide a clearer understanding of our research. We appreciate the valuable feedback provided by the reviewers and look forward to receiving further comments that will help us refine and strengthen our paper. - Overclaim: We apologize for the oversight in not citing the recent work [1] in ICML 2023. We have now added the reference to this work and revised our claims accordingly in the corresponding paragraphs. We agree that there might have been a timeline issue that led to our unawareness of this work, and we appreciate your understanding. We also acknowledge that our work and [1] can be considered as concurrent works. and we will make this clear in our revised manuscript. - Abuse of notations: We apologize for any confusion caused by the notational errors and inconsistencies in our manuscript. We rewrite section 4.3 for better clarity and have carefully revised and corrected these notations. Please refer to the comment “Section 4”. We appreciate your detailed feedback and suggestions, which have helped us improve the clarity of our presentation. - More baselines: We agree that including more recent diffusion baselines such as PADE and DiffAE would strengthen our evaluation. We will include these baselines in our experiments and update the results in the paper in the Discussion stage. - More metrics for CelebA: We appreciate your suggestion to include the tailored quantitative metric [4] for evaluating disentanglement performance on CelebA. We will add this metric in the Discussion stage. For questions: - We propose a modified framework for disentangling the DPM of real world data in Appendix G, I and present the results trained on CelebA HQ and FFHQ. We infer that one possible explanation for the observed distortion is due to the requirement of reconstruction and the weak ability of pretrained DPM. The disentanglement process might not be so perfectly captured due to the challenge of the complex real world dataset. Another factor could be the choice of loss functions, which might not be optimal for achieving a perfect balance between disentanglement and image quality. We leave it for future work. - We have modified the draft in Section 4 for clarity. Eq.6 in our submission is taken from the classifier guidance trick proposed in [1] (Please see equation 14 in it), the role of it is to build the score function of conditional distribution. Eq.7 in our submission is taken from the reverse problem proposed in [2] (Please See Equation 10 in it), the role of it is to recover the data sample with the score function. We want to use these equations to sample the recovered samples based on representations. [1] Diffusion Models Beat GANs on Image Synthesis. [2] Improving Diffusion Models for Inverse Problems using Manifold Constraints. NeurIPS 2022 --- Rebuttal Comment 1.1: Comment: Thank you for addressing my comments! The experiments you added look promising. It seems this work achieves considerable improvements on disentanglement tasks. However, I still have some concerns regarding Section 4. - There're still some notations that have not been used properly, e.g. $\mapsto$, which people usually use it to depict how elements from the domain are transformed by the function, not how the entire domain is mapped to a single element in the codomain. There should be clear explanations. - Typos like "follow ..." which should be "following". (Although I appreciate the technical contributions in this paper and the efforts that the authors put into the rebuttal, the paper still needs carefully proofreading and more polishing as a publication.) - Regrading how the authors arrive at the final disentangling loss, I am still a bit confused. It seems the authors introduced some new notations in proposition 1 and the proof sketch (the proof sketch is still not that clear to me though). I think I understood the basic idea of minimizing mutual information. But I am still curious about the full derivation. Could you please provide it? --- Reply to Comment 1.1.1: Title: Response Comment: Dear Reviewer 6mFa, Thank you for your valuable comments and for acknowledging the improvements in our work. We genuinely appreciate your feedback. Please find our responses to your concerns below: Regarding the notations in Section 4: - We regret that our writing makes you misunderstand the notation here. The notation here is exactly to depict how elements from the domain are transformed. The input of the function $h$ is an element $(f_1,\dots,f_N)$, which is a vector in $R^N$ space, **NOT** the factor set $\mathcal{C}$. In addition, as shown in our response for Reviewer ZKsn, we will delete the function $h$ and $\mapsto$. We will ensure that the notations are used properly and clearly. We will make the necessary modifications and ensure that the entire domain is mapped correctly in our updated version. - Typos and proofreading: We appreciate your attention to details. We actually want to use an imperative sentence here. We will thoroughly proofread the paper to eliminate typos and improve the overall clarity. We are committed to presenting a well-polished publication. Final disentanglement loss "derivation": - We appreciate your attention on the proof sketch. Our target of providing proposition 1 is to demonstrate a better understanding of the losses proposed. The loss function and method is derived from an intuitive way as explained in our paper and can not be derived directly from the proposition. Please note that we replace the original notation with new notations for better understanding and clarity rather than introducing new variables in comment “Section 4”, e.g., $\hat{z}^{k|c} = E^k_\phi(\hat{x}^c_0)$. - We understand that only a proof sketch may not be sufficiently clear. Due to space constraints and NeurIPS policies (we can not use any url in this page or update our paper now), we present an explanation of the proof sketch and its main ideas, which should help to clarify our proof. If it is possible, we are more than happy for the reviewers to verify the details. - We first decompose the mutual information into two terms, entropy and conditional entropy, following the approach in [1,3,4]. Since the encoder is fixed for the disentangling loss, the entropy remains constant. We then introduce a distribution $q$ to divide the conditional entropy into two components: the expectation of $\log q$ and the KL divergence between $p$ and $q$, as also demonstrated in [1,4]. For the first term, considering the PGM in Figure 1(a), we can establish two lemmas to involve $x_0$ in the expectation and resample $\hat{z}^{c|c}$ conditioned on $x_0$ to replace the original $z_c$ in $q$. A similar method to Lemma 1 in [1] can be applied to prove this. Subsequently, we assume $q$ follows a Gaussian distribution and derive the analytical form of the first term. Finally, we can utilize the triangle inequality to obtain the distance of representations as an upper bound. In the second term, we assume that the $p$ distribution follows a Gaussian distribution as well (please note that an assumption on $p$ is necessary to derive an upper bound for MI, as demonstrated in [2,3,4]). By deriving the upper bound of this term, the KL divergence can be expressed as the distance between the means, which corresponds to the distance between the representations. PS: We can also regard the first term and the last term in Eq (6) of "Section 4" as the negative pairs and the middle term in Eq (6) of "Section 4" as positive pairs in [2], which implies there may be more than one method to provide the proof. [1] InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets NIPS 2016. [2] CLUB: A Contrastive Log-ratio Upper Bound of Mutual Information ICML 2020. [3] The im algorithm: A variational approach to information maximization NIPS 2003. [4] Deep Variational Bottleneck ICLR 2017. --- Rebuttal 2: Comment: Dear Reviewer 6mFa, thanks a lot for your significant discussion of this work and your community service! The authors have now added significant additional material as a response and I wanted to ask if you are satisfied with their answer given that the discussion period is soon coming to an end. All the best, Your AC. --- Rebuttal Comment 2.1: Comment: Dear Authors, Thank you for providing this detailed proof promptly! I understand the discussion deadline is coming soon. So I will only raise two questions regarding the proof. - In Proposition 4, the second case seems to state that $I_\theta$ will be a lower bound instead of an upper bound. - How does the "triangular" objective in the proof sketch arrive according to the propositions you give? --- Reply to Comment 2.1.1: Title: Response Comment: Dear Reviewer 6mFa, Thank you for your insightful questions. We would like to address your concerns as follows: - Regarding Proposition 4, the second case of proposition 4 demonstrates that our estimator can serve as an estimator $I_\theta$ with an absolute error bounded by a small positive number $\epsilon$. Recall that our goal is to optimize mutual information. It should be more beneficial to optimize an estimated mutual information rather than just its upper bound. Therefore, when our estimator serves its purpose, whether it acts as an upper or lower bound becomes less critical. We appreciate your feedback on this point and will update our comments and the manuscript to make this point clearer in the updated version. - As for the "triangular" objective in the proof sketch, we have already optimized the original proof to obtain a more clear and simplified one. We are now providing an updated version without it which can provide more clear explanations for our method. We hope that these clarifications address your concerns and help you better understand our work. Please let us know if you have any further questions or require additional information.
null
null
Rebuttal 1: Rebuttal: # Section 4 We assume that a dataset $\mathcal{D} = \lbrace x_0\sim p(x_0)\rbrace$ is generated by $N$ underlying ground truth factors $\mathcal{C} =\lbrace f^c|c=1,\dots,N\rbrace$, where $p(x_0)$ is the data distribution, i.e., each sample is generated by the underlining factors, $h: (f^1, \dots, f^N) \mapsto x_0, \forall x_0 \in \mathcal{D}$ and each factor $f^c \sim p(f^c)$, where $p(f^c)$ is the distribution of factor $c$. The conditional distributions of the factors are $\lbrace p(x_0|f^c)|c=1,\dots,N\ \rbrace$, each of which can be shown as a curved surface in Fig. A (b). The relation between the conditional distribution and data distribution can be formulated as: $$ p(x_0) = \int p(x_0|f^c)p(f^c)d f^c, (1) $$ Note that $f^1,\dots,f^N$ and $x_0$ form a v-structure in PGM, as shown in Fig. A (a). A DPM learns a model $\epsilon_\theta(x_t,t)$ to predict the noise added to a sample $x_t$, then this can be used to derive a score function: $\nabla_{x_t}\log p(x_t) = {1/ \sqrt{\alpha_t - 1}}\cdot\epsilon_\theta(x_t,t)$. Follow [6] to use Bayes Rule, we can write the score function of the conditional distribution as: $$ \nabla_{x_t}\log p(x_t|f^c) = \nabla_{x_t}\log p(f^c|x_t) + \nabla_{x_t}\log p(x_t), (2) $$ By using a score function of $p(x_t|f^c)$, we can sampling data conditioned on the factor $f^c$. In addition, Eq. 2 can be extended to one conditioned on $\mathcal{S} \subseteq \mathcal{C}$ by replacing $f^c$ with $S$. The goal of disentangling a DPM is to model $\nabla_{x_t} \log p(x_t|f^c)$ for each factor $c$. Base on Eq. 2, we can learn $\nabla_{x_t}\log p(f^c|x_t)$ for a pre-trained DPM. However, $f^c$ is unknown in an unsupervised setting. Fortunately, we can learn a set of disentangled representations $\lbrace z^c| c=1,\dots,N\brace$. There are two requirements for these representations: $(i)$ They include all information of $x_0$, named as *completeness*, and $(ii)$ there exists one-one mappings: $z^c \mapsto f^c$ for each factor $c$, named as *disentanglement*. With these representations, we can use $\nabla_{x_t}\log p(z^c|x_t)$ to approximate $\nabla_{x_t}\log p(f^c|x_t)$. Our target is to disentangle a pre-trained unconditional DPM on dataset $\mathcal{D}$: $p_\theta(x_{t-1}|x_t) = \mathcal{N} (x_{t-1};\mu_\theta(x_t, t), \sigma_t)$ in an unsupervised manner. Specifically, given $x_0\in\mathcal{D}$, for each factor $c\in\mathcal{C}$, we can learn the disentangled representation $z^c$ via an encoder $E_\phi$, and the corresponding disentangled gradient field $\nabla_{x_t} \log p(z^c|x_t)$. Since random variables $z^1,\dots,z^N$ and $x_0$ forms a common cause structure, $z^1,\dots,z^N$ are independent when conditioned on $x_0$. We prove that this is also hold for $x_t$. We thus formulate the gradient field $\nabla_{x_t} \log p(z^S|x_t)$, $z^{\mathcal{S}} = \lbrace z^c|f^c \in \mathcal{S}\rbrace$ conditioned on subset $\mathcal{S} \subseteq \mathcal{C}$ as $$ \nabla_{x_t} \log p(z^S|x_t) = \sum_{f^c\in S}\nabla_{x_t} \log p(z^c|x_t). (3) $$ We follow [6,41], to model the conditional reverse process as a Gaussian distribution $p_\theta(x_{t-1}|x_t, z^{\mathcal{S}})$. Together with Eq. 3, the distribution has a shifted mean given by: $$ \mathcal{N} (x_{t-1}; \mu_\theta(x_t, t) + \sigma_t\sum_{f^c\in \mathcal{S}}\nabla_{x_t} \log p(z^c|x_t), \sigma_t). (4) $$ Directly using $\nabla_{x_t} \log p(z^c|x_t)$ complicates the diffusion model training pipeline [14]. We thus use a decoder $G_\psi(x_t, z^c, t), f^c\in \mathcal{S}$ to estimate the gradient fields $\nabla_{x_t} \log p(z^\mathcal{C}|x_t)$. To achieve this goal, we first use $\sum_{f^c\in \mathcal{C}}G_\psi(x_t,z^c,t)$ to estimate $\nabla_{x_t} \log p(z^\mathcal{S}|x_t)$ based on Eq.3. Therefore, we adopt the loss inPDAE [41] but using summation of $G_\psi(x_t,z^c,t)$ instead as $$ L_r = \mathop{\mathbb{E}}\limits_{x_0,t,\epsilon}\|\epsilon - \epsilon_\theta(x_t,t) + \frac{\sqrt{\alpha_t}\sqrt{1-\bar \alpha_t}}{\beta_t}\sigma_t\sum_{f^c\in \mathcal{C}}G_\psi(x_t,z^c,t)\|. (5) $$ The above equation can result in that $x_0$ can be reconstructed using all disentangled representations $E_\phi(x_0)$, which is exactly the *completeness* requirement of disentanglement. Secondly, to meet the *disentanglement* requirement, each $G_\psi(x_t,z^c,t)$ should approximate $\nabla_{x_t} \log p(z^c|x_t)$, individually. With a learnt $G_\psi(x_t,z^c,t)$, we can sample $x^c_0$ conditioned on the disentangled representation $z^c$. We introduce the Disentangling Loss for the *disentanglement*, and to constrain that $G_\psi(x_t,z^c,t)$ is an estimation of $\nabla_{x_t} \log p(z^c|x_t)$. Since it is hard to find the sufficient condition for disentanglement in unsupervised setting, we follow prior works that propose necessary conditions worked in practice, e.g., the group constraint in [40], the maximization of mutual information in [22,1 in 6mFa]. We propose to minimize the mutual information between $z^c$ and $z^k$, $k \neq c$, as a necessary condition to disentangle a DPM. In the following, we transfer the minimizing of mutual information to the constraints in the representation space. We denote $\hat x_0$ is sampled from the pre-trained unconditioned DPM using $\epsilon_\theta(x_t,t)$, $\hat x^c_0$ is conditioned on $z^c$ and sampled using $G_\psi(x_t,z^c,t)$. Then we can extract the representations from the samples $\hat x_0$ and $\hat x^c_0$ with $E_\phi$ as $\hat z^k = E_\phi^k(\hat x_0)$ and $\hat z^{k|c} = E_\phi^k(\hat x^c_0)$, respectively. According to Proposition 1 (Proof sketch in Fig. B), we can minimize an proposed variational upper bound to minimize the mutual information. **Proposition 1** For a PGM in Fig. A (a), an upper bound of the mutual information between $z^c$ and $z^k$ (where $k \neq c$) is as following: $$ \min \mathop{\mathbb{E}}\limits_{k,c,x_0,\hat{x}_0^c} \|\hat{z}^{k|c} - \hat{z}^k\|- \|\hat{z}^{c|c} - \hat{z}^c\| + \|z^c - \hat{z}^{c|c}\| (6) $$ Pdf: /pdf/7602caf769e69b5010e8491c7aed8c28004e4b46.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations
Reject
Summary: The authors propose EquiformerV2, which is an improvement over the original Equiformer architecture. The main improvement is using a more efficient parameterization of the tensor products used in Equiformer which is computationally expensive for higher order representations. The more efficient parameterization involves using SO(2) linear layers instead of SO(3) tensor products. Moreover, they have three architectural improvements – adding an extra layer norm during attention, S2 activations instead of gates and "separable" layer normalization which differs from the previous layer normalization in terms of the denominator used. They perform experiments on OC20 to show improvements compared to other models in the literature. Strengths: Overall, the idea of using computing tensor products more efficiently using SO(2) linear layers makes good sense in its application to Equiformer in order to scale it up via higher order SO(3) representations. Empirical work around improving the architecture by appropriate use of different type of layer norms and activation functions is also valuable. The components feeding into this model aren't always original, but that shouldn't detract from the significance of putting together the full model and showing that it improves performance on a benchmark. The clarity of the writing is usually good but there are places where it can be improved (see below). Weaknesses: The main weakness is insufficient comparison against the original Equiformer architecture. There are four differences versus the V1 architecture as far as I can tell: (1) the more efficient parameterization of tensor products via the ideas of eSCN, (2) an extra layer norm applied to scalar features in the attention module, (3) separable S2 activation instead of gates and (4) layer normalization with a different way of computing the denominator (called "separable" layer normalization in the paper). - There is an ablation in Table 1(a) on one of the datasets looking at the effects of changes (2), (3), and (4). It is unclear how much of the difference in performance (specially, on energies) is statistically significant. I understand training each model is expensive, but since the experiments in this table are on the smaller S2EF-2M dataset, isn't it possible to have error bars here? - Why isn't there a comparison in Table 1(a) of the effect of incorporating change (1) in EquiformerV2? The SO(2) linear layers replacing the tensor product are in principle as expressive as the tensor products but the optimization dynamics can be different since the parameterization of the architecture is different. It would be good to see a side-by-side comparison of Equiformer's V1 and V2 when keeping other architectural hyperparameters fixed (e.g. number of channels, maximum representation order etc.) - It would be good to include EquiformerV1 performance in the other sub-tables of Table 1 too, in order to see the cummulative effect of the changes to the original architecture. - Similarly, it would be useful to see EquiformerV1 performance in Table 2 and 3 as well (possibly with a lower $L_\text{max}$ but with other hyperparameters tuned). - How does the setup for IS2RE in Table 2 differ from EquiformerV1's Table 3? Is one with relaxations after training on S2EF and the other is through direct energy prediction? A side-by-side comparison under the same setup would help a lot here. - How necessary are the higher orders compared to increasing the number of layers or channels? SO(2) linear layers instead of tensor products will improve the efficiency for lower orders too, so can we make up for the performance of higher orders by instead increasing the number of layers or channels? - Have you benchmarked EquiformerV2 on QM9 where EquiformerV1 has already been benchmarked? It's a dataset that is different from OC20 in various ways, so it would be useful to see a comparison. For other suggestions for improvements, see the questions below. Equiformer(V1): https://arxiv.org/abs/2206.11990 Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - The explanation of separable S2 activation was unclear: what does line 186 mean? "The activation first converts vectors of all degrees to point samples on a sphere for each channel, applies unconstrained functions F to those samples, and finally convert them back to vectors." - Have you done direct IS2RE prediction as done in the EquiformerV1 paper with/without an auxiliary denoising loss? - Can you justify "higher degrees can better capture angular resolution and directional information"? Perhaps via an example? - Was stochastic depth used in EquiformerV2? Related work section mentions it was used in EquiformerV1? - Line 69: "apply typical functions to rotated features during message passing" was unclear to me. - Line 142: "eSCN convolutions go one step further and replace the remaining non-trivial paths of the SO(3) tensor product with an SO(2) linear operation to allow for additional parameters of interaction between $\pm m_o$ without breaking equivariance." Can you explain this in the context of Appendix A.3? - Line 175: why would $f^{(0)}_{ij}$ be less well-normalized? - Line 178: there is a slight overload in notation with $a$ and $a_{ij}$. - Line 210: why is there no centering in the layer norm formulation? Wouldn't that continue to preserve equivariance as the mean vector belongs to the same representation? There is centering in the 0th degree activations and traditional layer norm. - Line 210: do you use a small $\epsilon$ in the new separable layer normalization denominator for numerical stability? - Line 280: "Finally, these modifications enable training for longer without overfitting (row 7)" don't both models improve performance when trained for longer, not just the model in row 7? - On comparison of speed: are all the models using the same training/inference pipeline such that the comparison is fair? It would be useful to report inference cost in FLOPs or a similar metric as well. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Limitations and broader impact are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > 1. [Weakness 1] Statistic significance of numbers in Table 1(a). We empirically find that training the same models twice with identical settings results in almost the same force MAE with differences less than 0.05 meV/Å. In comparison, increasing the number of epochs from 12 to 20 and from 20 to 30 improves force MAE by 0.68 meV/Å and 0.36 meV/Å, respectively. As in Table 1(a), using the proposed separable S^2 activation and separable layer normalization improve force MAE by 1.09 meV/Å and 0.31 meV/Å. The improvement brought by the proposed architecture is comparable to that by training for longer. But as shown in Table 5 in the appendix, training one base model on S2EF-2M dataset takes 1412 GPU-hours, so it is expensive to repeat for all of Table 1(a) to compute error bars. > 2. [Weakness 2] Comparison between SO(3) convolutions with eSCN parameterization and tensor product parameterization. We provide the comparison in **General Response 8**. > 3. [Weakness 2] Comparison of EquiformerV1 and V2 when keeping other architectural hyperparameters fixed We provide the comparison between EquiformerV1 and EquiformerV2 using the OC20 S2EF 2M dataset in **General Response 7**. > 4. [Weakness 3] Include EquiformerV1 performance in Table 1. Thanks for the suggestion. Since all models use higher degrees (i.e., $L_{max}$ >= 4), which EquiformerV1 cannot use because of huge memory consumption, we will have another separate table summarizing the comparison. > 5. [Weakness 4] EquiformerV1 performance in Table 2 and 3. Since training EquiformerV2 on OC20 S2EF-All and S2EF-All+MD takes 20.5k and 37.7k GPU-hours and training EquiformerV1 can take similar amounts of time, we are unable to provide the results. Moreover, since increasing $L_{max}$ from 2 to 4 can significantly boost performances as shown in **General Response 7** and EquiformerV1 cannot use $L_{max}$ = 4, it can be less motivated to scale up a potentially less performant model. > 6. [Weakness 5] Differences in setups for IS2RE in Table 2 and EquiformerV1's Table 3? The results in Table 2 in this work are obtained by running relaxations. The results in Table 3 in Equiformer are obtained by directly predicting relaxed energies (direct approach). We compare the performances of EquiformerV1 and EquiformerV2 on IS2RE with direct approaches in **General Response 5**. We do not compare the IS2RE results with relaxation approaches since we already know that using $L_{max}$ = 4 can significantly improve force predictions and therefore IS2RE results and that EquiformerV1 cannot use $L_{max}$ = 4 as mentioned in **General Response 7**. > 7. [Weakness 6] Necessity of higher orders. We train different EquiformerV2 models of similar training time (~1440 GPU-hours) but with different $L_{max}$ and numbers of blocks on OC20 S2EF-2M dataset and summarize the comparison in Table VIII in the PDF. Increasing $L_{max}$ from 2 to 4 is clearly better than increasing depth. Increasing $L_{max}$ from 4 to 6 is on par with increasing depth. > 8. [Weakness 7] Benchmark EquiformerV2 on QM9. We provide additional results in **General Response 4**. > 9. [Question 1] The explanation of line 186, "the activation first converts vectors of all degrees to point samples on a sphere for each channel, applies unconstrained functions F to those samples, and finally convert them back to vectors." We provide the details in **Response 8 to Reviewer 6dKw**. > 10. [Question 2] EquiformerV2 results on OC20 IS2RE with direct setting. We provide the results in **General Response 5**. > 11. [Question 3] Justify "higher degrees can better capture angular resolution and directional information". We provide more details in **General Response 10**. > 12. [Question 4] Was stochastic depth used in EquiformerV2? Yes. See Table 4 in appendix. > 13. [Question 5] Apply typical functions to rotated features during message passing. Typical functions refer to those we can use without considering equivariance or any constraint. > 14. [Question 6] Line 142: "eSCN convolutions go one step further and replace the remaining non-trivial paths of the SO(3) tensor product with an SO(2) linear operation to allow for additional parameters of interaction between +-m_0 without breaking equivariance." Line 142 corresponds to Line 436-439. eSCN convolutions directly use w^{\tilde} for learnable parameters. This allows removing the summation over degrees of filter (i.e., $L_{f}$) and removing Clebsch-Gordan coefficients. Since using w^{\tilde} is mathematically equivalent to using w as discussed in the work of eSCN, this will preserve equivariance. > 15. [Question 7] Line 175: why would f_{ij}^{(0)} be less well-normalized? We provide more details about attention re-normalization and separable layer normalization in **General Response 9**. > 16. [Question 8] Line 178: Overload in notation with a and a_{ij}. We will replace a with $w_{a}$ for better clarity. > 17. [Question 9] Line 210: No centering in the layer norm formulation. Adding centering to layer normalization for vectors of degrees > 0 can preserve equivariance. However, in Equiformer, using centering for degrees > 0 can hurt performance, and therefore, Equiformer does not include that. EquiformerV2 follows this practice. > 18. [Question 10] Line 210: small \epsilon in the new separable layer normalization? Yes. > 19. [Question 11] ​​Line 280: "Finally, these modifications enable training for longer without overfitting (row 7)" don't both models improve performance when trained for longer, not just the model in row 7? Yes, we will remove that for better clarity and just mention training for longer can keep improving the performance. > 20. [Question 12] Comparison of speed and FLOPs. Models are trained with the same training/inference pipeline. In **General Response 6**, we provide detailed comparisons of training time, training throughput, numbers of parameters between models in Table 2. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thanks for the response -- I've increased my score to a 6. --- Reply to Comment 1.1.1: Comment: Thank you for increasing the score from 5 to 6. Please let us know if you have any other question.
Summary: This paper provides a new equivariant graph neural networks named EquiformerV2 to enhance the original Equiformer performance. It uses four new modules. The first module is to use the convolution in the eSCN (https://arxiv.org/abs/2302.03655) to replace the depth-wise tensor production accelerating the speed. The second module is the separable $S^{2}$ activation using the spherical grids in SCN (https://arxiv.org/abs/2206.14331) to encourage the non-linearity. The third module is the separable layer normalization which uses the variance of all equivariant feature $\ell \geq 1$ to do normalization. The fourth module is the attention re-normalization using a layer normalization before the non-linear function of the attention score branch. Strengths: 1. This paper is well organized and written. The figure clearly shows the modification on the model architecture. 2. The experiments in Table 2 supports the proposed model architecture can achieve SOTA performance on the OC20 All training sets as well as OC20 All+MD. For example, on OC20 All, test energy MAE is improved 13meV in S2ET test and 25meV in IS2RE test. Such improvement is great. These two datasets usually take extensive training time to perform experiments on them. From the Throughput metric, the EquiformerV2 has better training efficiency compared to current baselines. Weaknesses: 1. As a suggestion, it will be better if efficiency study includes both training time and inference time. Computational complexity metric such as FLOPs or MACs can help measure the inference complexity. 2. The description of spherical grid is not very clear in the paper. Although it is introduced in the SCN paper, I think a brief introduction can help people understand why such operation can enhance the non-linearity. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. For the speed up of DFT calculations, could you briefly introduce the AdsorbML and explain why EquiformerV2 can be used to accelerate the DFT? 2. Is it possible to apply such efficient equivariant architecture in directly accelerating the DFT calculations such as the SchNorb (https://www.nature.com/articles/s41467-019-12875-2), PhiSNet (https://arxiv.org/abs/2106.02347), QHNet (https://arxiv.org/abs/2306.04922) and recent QH9 dataset (https://arxiv.org/abs/2306.09549)? Since the computational complexity is also a problem for these equivariant networks. 3. For the equivariance, since the equiformerV2 uses the spherical grids, I am curous about whether equiformerV2 is rigorous equivariant or approximate equivariant. If this architecture is approximate equivariant, I think it will be great if there are some experiments to verify the equivariance. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: 1. As an suggestion, to comprehensively study the improvement on original Equiformer, it will be better if there is a comprehensive experiments on the original datasets such as QM9. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > 1. [Weakness 1] As a suggestion, it will be better if efficiency study includes both training time and inference time. Computational complexity metric such as FLOPs or MACs can help measure the inference complexity. We provide the comparison in **General Response 6**. > 2. [Weakness 2] The description of spherical grid is not very clear in the paper. Although it is introduced in the SCN paper, I think a brief introduction can help people understand why such an operation can enhance the non-linearity. We provide more details about S^2 activation in **Response 8 to Reviewer 6dKw**. Moreover, since the inner products between one channel of vectors of all degrees and the spherical harmonics projections of sampled points sum over all degrees, the conversion to 2D grid feature maps implicitly considers the information of all degrees. Therefore, S^2 activation, which converts equivariant features into 2D grid feature maps, uses the information of all degrees to determine the non-linearity. In contrast, gate activation only uses vectors of degree 0 to determine the non-linearity of vectors of higher degrees. More concretely, gate activation applies sigmoid to vectors of degree 0 to obtain non-linear weights and then multiply vectors of higher degrees with those non-linear weights. For tasks such as force predictions, where the information of degrees is critical, S^2 activation can be better than gate activation since S^2 activation uses all degrees to determine non-linearity. > 3. [Question 1] For the speed up of DFT calculations, could you briefly introduce the AdsorbML and explain why EquiformerV2 can be used to accelerate the DFT? We provide the details in **Response 5 to Reviewer 3XDb**. > 4. [Question 2] Is it possible to apply such efficient equivariant architecture in directly accelerating the DFT calculations such as the SchNorb, PhiSNet, QHNet and recent QH9 dataset? Since the computational complexity is also a problem for these equivariant networks. Yes, this is a great suggestion, and it is possible to apply EquiformerV2 to those problems, but implementing them would require substantial architectural changes and can be future work. Concretely, since they are predicting Hamiltonian matrices, they need complete graphs, where all nodes are connected by edges. Besides, in addition to node features, they need to maintain edge features beyond just for message passing. Finally, they also need inverse tensor products applied to edge features to predict Hamiltonian matrices at the output. These modifications are beyond the scope of this work, but we do agree that higher degrees with higher efficiency can be very helpful for predicting Hamiltonian matrices. > 5. [Question 3] For the equivariance, since the EquiformerV2 uses the spherical grids, I am curious about whether EquiformerV2 is rigorous equivariant or approximate equivariant. If this architecture is approximate equivariant, I think it will be great if there are some experiments to verify the equivariance. We provide the details in **Response 8 to Reviewer 6dKw**. eSCN empirically computes such errors in Figure 9 in their latest manuscript and shows that the errors of using $L_{max}$ = 6 and sampling resolution R = 18 are close to 0.2%, which is similar to the equivariance errors of tensor products in e3nn. The equivariance errors in e3nn are due to numerical precision. As long as we choose a high R, the equivariance errors can be empirically kept at the same level as the errors of those strictly equivariant operations. > 6. [Limitation] As a suggestion, to comprehensively study the improvement on original Equiformer, it will be better if there are comprehensive experiments on the original datasets such as QM9. We provide additional results on QM9 dataset in **General Response 4**, additional results on OC20 IS2RE dataset in **General Response 5**, and additional results on OC20 S2EF 2M dataset in **General Response 7**. In general, EquiformerV2 improves upon Equiformer when the targeted datasets or tasks such as OC20 S2EF and OC20 IS2RE with IS2RS auxiliary task require higher degrees or more expressivity. For the smaller datasets like QM9, the benefits of using stronger models are not very obvious. We note that this is consistent with the findings in previous works like Equiformer [1] (Section G in appendix) and Noisy Nodes [2]. Additionally, we note that Row 1 in Table 1(a) does correspond to the architecture of the original Equiformer except that we use efficient SO(3) convolutions to incorporate higher degrees to isolate the effect of the proposed architectural improvements. [1] Liao et al. Equiformer: Equivariant Graph Attention Transformer for 3D Atomistic Graphs. ICLR 2023. [2] Godwin et al. Simple GNN Regularisation for 3D Molecular Property Prediction and Beyond. ICLR 2022.
Summary: In this paper, the authors proposed EquiformerV2, which is an equivariant network for 3D molecular modeling. The EquiformerV2 is built on the Equiformer with several architectural modifications: 1) replace SO(3) convolutions (tensor product operations) with efficient SO(2) counterparts from eSCN; 2) Attention Re-normalization; 3) Separable S2 activation; 4) separate Layer Normalization. These changes enable EquiformerV2 to achieve good performance on the large-scale OC20 benchmark and also the new AdsorbML dataset. Strengths: 1. The targeted problem is of great interest to the community. The EquiformerV2 provides another attempt to enlarge the maximum degree of irreducible representations and obtain performance gains on large-scale DFT benchmarks. 2. Good empirical performance. On the OC20 benchmark, the EquiformerV2 achieves state-of-the-art performance on the Structure-to-Energy-Force task. The model trained on this task further serves as a force-field evaluator to achieve strong performance on IS2RS and IS2RE tasks. The EquiformerV2 outperforms the compared baselines on all these tasks, especially on force prediction. Additionally, it also improves the success rate a lot on the AdsorbML dataset. Weaknesses: 1. The novelty of integrating the eSCN convolution and S^2 activation into the Equiformer is limited. Among the proposed architectural changes, the eSCN convolution is the key component to enable Equiformer to use irreducible representations of higher degrees, and the S^2 activation also replaces all non-linear activations. However, these design strategies should be mainly credited to the eSCN work. 2. The motivation for the other architectural modifications should be thoroughly clarified. First, lines 174-175 in Section 4.2 suggest the "less well-normalized" issue, which motivated the authors to propose re-normalization and Separable Layer Normalization. It is better to provide further quantitive evidence to reveal how such an issue affects the model's performance, and why these modifications could remove or mitigate such effect. Second, the authors proposed Separable S^2 Activation because the original S^2 activation would make the training process diverge. However, it is hard to understand why such separable modifications could make the training process stable. Is there any further essential reason behind such a phenomenon? It is suggested to provide further analysis on such modifications. 3. More analyses are necessary if the computational resources are acceptable. First, the authors did not provide a performance comparison between whether using the eSCN convolution. Although the eSCN convolution is equivalent to the SO(3) counterpart, the computation processes of these two parameterizations are different. Second, it is suggested to further report the number of model parameters and memory costs for the EquiformerV2 and the compared baselines in Table 2. Third, how does EquiformerV2 perform on the IS2RS and IS2RE using the direct setting (use or not the denoising setting) that is the same as EquiformerV1? Overall, the major weakness of this work lies in the novelty, unclear motivations, and incomplete analyses. If the authors could well address the above concerns, I would like to increase my scores. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Towards the claim "Higher degrees can better capture angular resolution and directional information, which is critical to accurate prediction of atomic energies and forces". [Lines 32-33, also see Lines 97-98], could you further provide explanations and evidence from deep learning molecular models? 2. How is the S^2 activation implemented? Is there any sampling process inducing randomness or incompleteness to make the module not strictly equivariant? If so, is there any measurement of such approximated equivariance? 3. Could you briefly introduce how the EquiformerV2 model accelerates the DFT calculation in AdsorbML? Are there further quantitative results on the error measure (e.g., MAE) between the EquiformerV2 predictions and DFT labels? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors carefully discuss the broader impact and limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > 1. [Weakness 1] The novelty of integrating the eSCN convolution and S^2 activation into the Equiformer is limited. We would like to clarify that the contribution of this work is to investigate whether the design choices of previous equivariant Transformers, which consider only lower degrees, can scale well to higher degrees as mentioned in the section of abstract. Replacing original tensor products with eSCN convolutions is necessary to scale to higher degrees, and we are interested in what architectural changes we should have after using eSCN convolutions. As discussed in this work, we need to add one additional normalization in the attention blocks, modify activation functions and modify equivariant normalization layers to better leverage the benefits of higher degrees brought by eSCN convolutions. We also show that the original Equiformer architecture is sub-optimal when higher degrees are used as in Table 1(a). Additionally, we found that directly using S^2 activation does not result in stable training. We then proposed separable S^2 activation, which applies standard (typical) activation functions to invariant features (L = 0) and S^2 activation to equivariant features (L > 0). While these modifications might seem simple, we see simplicity as a strength, it takes extensive empirical investigation to attain it, and as noted by reviewer 9yvD, we hope that it doesn’t “detract from the significance of putting together the full model and showing that it improves performance on a benchmark.”. > 2. [Weakness 2] The motivation for attention re-normalization and separable layer normalization. We provide more details about attention re-normalization and separable layer normalization in **General Response 9**. > 3. [Weakness 2] Training instability of S^2 activation and motivation for separable S^2 activation. We found that row 3 (with S^2 activation) in Table 1(a) has 10X larger gradient norm than row 4 (with separable S^2 activation) after 25% of the training. The sudden increase in gradient norm results in training instability. In contrast, gate activation, where scalars are updated with typical SiLU activation and transformed on their own, does not have any training instability. Therefore, this motivates applying separate activation functions to scalars and vectors of degrees > 0. We will add this observation and motivation to the manuscript. > 4. [Weakness 3] Comparison between SO(3) convolutions with eSCN parameterization and tensor product parameterization. We provide the comparison in **General Response 8**. > 5. [Weakness 3] Second, it is suggested to further report the number of model parameters and memory costs for the EquiformerV2 and the compared baselines in Table 2. We provide the comparisons of training time, training throughput and numbers of parameters in **General Response 1**. We report the maximum batch size each model can use on a single V100 GPU (32GB) as below. | Model | Maximum batch size | |-----------------------------|---------------------| | GemNet-OC-L | 6 | | SCN L=6 K=16 (4-tap 2-band) | 4 | | SCN L=8 K=20 | 8 | | eSCN L = 6 K = 20 | 4 | | EquiformerV2 (31M) | 14 | | EquiformerV2 (153M) | 4 | > 6. [Weakness 3] Comparison of EquiformerV1 and EquiformerV2 on OC20 IS2RE with a direct setting. We provide the results in **General Response 5**. > 7. [Question 1] Towards the claim "Higher degrees can better capture angular resolution and directional information, which is critical to accurate prediction of atomic energies and forces". [Lines 32-33, also see Lines 97-98], could you further provide explanations and evidence from deep learning molecular models? We provide more details about “higher degrees can better capture angular resolution and directional information” in **General Response 10**. > 8. [Question 2] Implementation of S^2 activation. Our implementation is the same as that in e3nn, SCN and eSCN. We uniformly sample a fixed set of points on a unit sphere along the dimensions of longitude (alpha) and latitude (beta). We set the resolutions R of alpha and beta to be 18 when $L_{max} = 6$, meaning that we will have 324 (=18 * 18) points. Once the points are sampled, they are kept the same during training and inference, and there is no randomness. For each point, we compute spherical harmonics projection of degrees up to $L_{max}$. We consider an equivariant feature of C channels and each channel contains vectors of all degrees from 0 to $L_{max}$. When performing S^2 activation, for each channel and for each sampled point, we compute the inner product between the vectors of all degrees contained in one channel of the equivariant feature and the spherical harmonics projections of a sampled point. This results in R * R * C values, where the first two dimensions (R * R) correspond to grid resolutions and the last corresponds to channels. They can be viewed as 2D grid feature maps and treated as scalars, and we can apply standard (typical) activation functions like SiLU or use standard linear layers performing feature aggregation along the channel dimension. After that, we project back to vectors of all degrees by multiplying those values with corresponding spherical harmonics projections of sampled points. Although there is sampling, Spherical CNNs and eSCN mention that as long as R is high, the equivariance error can be close to 0. eSCN empirically shows that the errors are similar to numerical errors of strictly equivariant operations. > 9. [Question 3] More details about AdsorbML and further quantitative results. Please refer to **Response 5 to Reviewer 3XDb** for details about AdsorbML. We provide more quantitative results **General Response 2**.
Summary: This paper propose EquiformerV2, a advance verison based on Equiformer and eSCN structure extend to higher degree representations, which achieve better performance in force and energy tasks. Strengths: This paper is well-written and organized, presenting a clear and coherent structure throughout. The introduced EquiformerV2, an upgraded version of the original Equiformer, including three architectural improvements: attention re-normalization, separable S^2 activation, and separable layer normalization. These enhancements contribute to the SOTA performance in OC20 dataset. The proposed model achieves a high degree representation with efficiency, as outlined in the paper. The authors also provide a comprehensive ablation study to support the necessity of these modifications, effectively highlighting their respective contributions to the overall performance improvement. Weaknesses: * A few spelling errors. For instance, in Section 6, the word "acknolwdge" etc. Along with any other mistakes found throughout the manuscript. * The experiments conducted in this study primarily utilize the OC20 dataset. While this dataset is relevant, it is essential to note that there are various other DFT-based datasets available that could provide a more comprehensive evaluation of the proposed architecture. Such as OC22, OQMD[1,2], SPICE[3], and PCQM4Mv2 etc. [1] Saal, J. E., Kirklin, S., Aykol, M., Meredig, B., and Wolverton, C. "Materials Design and Discovery with High-Throughput Density Functional Theory: The Open Quantum Materials Database (OQMD)", JOM 65, 1501-1509 (2013). [2]. Kirklin, S., Saal, J.E., Meredig, B., Thompson, A., Doak, J.W., Aykol, M., Rühl, S. and Wolverton, C. "The Open Quantum Materials Database (OQMD): assessing the accuracy of DFT formation energies", npj Computational Materials 1, 15010 (2015). [3] SPICE, A Dataset of Drug-like Molecules and Peptides for Training Machine Learning Potentials Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Would it be possible for you to include the results from the small dataset, as you mentioned? This would provide valuable insights into the performance and scalability of your proposed approach. * More details about structural relaxations, specifically, the relax trajectories and time efficiency. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: same above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > 1. [Weakness 1] A few spelling errors. For instance, in Section 6, the word "acknolwdge" etc. Along with any other mistakes found throughout the manuscript. Thanks for finding this. We will double check the paper and correct spelling mistakes if we find one. > 2. [Weakness 2] The experiments conducted in this study primarily utilize the OC20 dataset. While this dataset is relevant, it is essential to note that there are various other DFT-based datasets available that could provide a more comprehensive evaluation of the proposed architecture. Such as OC22, OQMD, SPICE, and PCQM4Mv2 etc. We provide additional results on OC22 dataset in **General Response 3**. > 3. [Question 1] Would it be possible for you to include the results from the small dataset, as you mentioned? This would provide valuable insights into the performance and scalability of your proposed approach. We provide additional results on QM9 dataset in **General Response 4** and additional results on IS2RE with a direct approach in **General Response 5**. > 4. [Question 2] More details about structural relaxations, specifically, the relax trajectories and time efficiency. Some relevant details of running relaxations can be found in Section C.2 in appendix. Specifically, after training models on OC20 S2EF dataset, we use LBFGS optimizer implemented in Open Catalyst GitHub repository to update the atomic positions given their atomwise forces predicted by trained models. Once the atomic positions are updated, we run models to predict the updated atomwise forces. The process of running relaxations, updating atomic positions and running trained models to predict atomwise forces, is repeated for a pre-defined number of steps or until the maximum predicted force per atom is less than 0.02 eV/Å.The predefined numbers of steps are 200 for OC20 IS2RE and IS2RS and 300 for AdsorbML. Additionally, the structures in the OC20 dataset have pre-defined sets for adsorbate atoms, surface atoms and subsurface atoms, and only the positions of first two types of atoms are updated during relaxations. The above setting is the same as previous works. Please refer to their paper [1] and GitHub repository [2] for more details and let us know if more specific details can be helpful. We additionally provide more details about AdsorbML [3] in the next response. Please let us know if there are any other details we can help provide on the relaxation trajectories. The amount of time required to run relaxations can also be found in Section C.2 in appendix. For the smaller version of EquiformerV2 trained on OC20 S2EF-All+MD dataset in **General Response 1**, the time for running relaxations is 240 GPU-hours for OC20 IS2RE and IS2RS and 298 GPU-hours for AdsorbML. The smaller version takes 4.48X and 3.61X less GPU-hours than the one reported in Table 2. Additionally, we compare GPU-seconds of different models in Table 3 required to run relaxations for AdsorbML in **General Response 2**. We note that the smaller version of EquiformerV2 in **General Response 1** is more accurate than other models and is 9.8X faster than SCN and 3.7X faster than GemNet-OC-MD-Large. [1] Chanussot et al. Open Catalyst 2020 (OC20) Dataset and Community Challenges. ACS Catalyst 2021. [2] https://github.com/Open-Catalyst-Project/ocp [3] Lan et al. AdsorbML: Accelerating Adsorption Energy Calculations with Machine Learning. ArXiv 2023. > 5. Additional details about AdsorbML. AdsorbML [1] aims at finding the global minimum of relaxed energy, or adsorption energy, given an adsorbate and a catalyst. The algorithm first generates some configurations of the adsorbate on the catalyst surface based on some heuristics or randomly, and it then runs relaxations for each of the resulting initial structures formed by those generated configurations. After running relaxations, some relaxed structures that do not satisfy predefined constraints (e.g., dissociations or desorptions) are removed, the relaxed energies of the remaining relaxed structures are calculated, and the lowest relaxed energy is considered the adsorption energy. Once we finish training models (e.g., EquiformerV2) on OC20 S2EF datasets, we can use the models for both relaxations and predictions of energies of relaxed structures. For relaxations, we use models to predict atomwise forces to update the atomic positions until the predicted forces are below a certain threshold or we reach a predefined number of optimization steps. Since originally the atomic positions are updated with compute-intensive DFT calculations, using ML models to replace DFT for force calculations can accelerate the process of AdsorbML. For predicting the energies of relaxed structures, we can either use single-point DFT (i.e., only one DFT calculation) or trained models. AdsorbML uses single-point DFT to predict the energies of relaxed structures by default since they find that it strikes a good balance between compute and accuracy. [1] Lan et al. AdsorbML: Accelerating Adsorption Energy Calculations with Machine Learning. ArXiv 2023. --- Rebuttal Comment 1.1: Comment: thanks for your response, I decide to maintain my initial scores for your submission after full consideration. --- Reply to Comment 1.1.1: Comment: Thanks for your response. We believe we addressed all your comments, but please let us know if you have any other question.
Rebuttal 1: Rebuttal: Thank you for all the constructive feedback! We are glad reviewers found the writing clear (3XDb, GdTP, 9yvD) and the empirical results on OC20 and AdsorbML impressive (6dKw, GdTP, 9yvD, 3XDb). We address general questions here: > 1. Smaller EquiformerV2 We trained a smaller version of EquiformerV2 on OC20 S2EF-All+MD dataset and include the results on OC20 and AdsorbML in the PDF (Tables I and II). > 2. Energy MAE, speed-accuracy trade-off for AdsorbML. See Table II in the PDF. > 3. Results on the OC22 dataset We train EquiformerV2 with $L_{max} = 6$, $M_{max} = 2$ and the number of blocks = 18 on OC22 S2EF-Total dataset and summarize the comparisons in Table III in the PDF. > 4. Results on the QM9 dataset. We compare EquiformerV2 with $L_{max} = 4$, $M_{max} = 4$, number of blocks = 6 with EquiformerV1 in Table IV in the PDF. Unlike OC20 S2EF, using higher degrees ($L_{max}$ increased from 2 to 4) does not result in a significant improvement on QM9. This is not surprising as QM9 has fewer examples and smaller graphs, resulting in tasks that are simpler than the OC20 S2EF task and therefore not requiring higher degrees or better expressivity. > 5. Results on OC20 IS2RE using a direct approach. We compare EquiformerV1 and EquiformerV2 on OC20 IS2RE with and without IS2RS auxiliary task in Tables V in the PDF. Using higher degrees in EquiformerV2 does not improve performance on IS2RE. However, when using IS2RS, EquiformerV2 improves upon EquiformerV1. Higher degrees with better expressivity can lead to overfitting if the task (IS2RE only) does not require better expressivity [1]. For the task of IS2RE + IS2RS, where better expressivity does translate to better performance [1], EquiformerV2 clearly improves energy MAE on all splits. [1] Godwin et al. ICLR 2022. https://arxiv.org/abs/2106.07971 > 6. Comparison of training time, inference speed, numbers of parameters and FLOPs between SCN, eSCN and EquiformerV2. We compare training time, training throughput and number of parameters of different models and compare FLOPs between SCN, eSCN and EquiformerV2. We note that all the models contain node-wise and pair-wise representations without triplet representations or quadruplet representations as used in GemNet-OC. As the numbers of representations can depend on datasets, we consider the comparison between SCN, eSCN and EquiformerV2 for simplicity and fairness. The comparison is in Table I in the PDF. Table 5 in appendix already reported training time, inference speed, and number of parameters of EquiformerV2. > 7. Comparison between EquiformerV1 and EquiformerV2 on OC20 S2EF 2M. We train EquiformerV1 and EquiformerV2 with $L_{max} = 2$, the number of channels = 128 for each degree and the number of blocks = 8 on OC20 S2EF-2M dataset. The results are summarized in Table VI in the PDF. EquiformerV2 requires 2.3x less training time and achieves better force MAE but slightly worse energy MAE than EquiformerV1. For EquiformerV2, we additionally increase $L_{max}$ from 2 to 4 to show the further performance gain. We cannot train EquiformerV1 with $L_{max} > 2$ due to huge memory cost. > 8. Comparison between SO(3) convolutions with eSCN parameterization and tensor product parameterization. We use EquiformerV2 with $L_{max} = 6$, $M_{max} = 2$ and the number of blocks = 12 to compare SO(3) convolutions with different parameterizations. We summarize the results in Table VII in the PDF. > 9. Clarification on attention re-normalization and separable layer normalization (LN). We use $f_{ij}^{(0)}$ (line 170-173) as an example. $f_{ij}^{(0)}$ is obtained by performing a linear operation aggregating all C channels of m = 0 components from all degrees (0 to $L_{max}$). This can be viewed as a feature of C * ($L_{max} + 1$) channels. LN in Equiformer normalizes each degree independently, which is similar to first dividing channels into ($L_{max} + 1$) groups and then normalizing each group. This can ignore relative importance of different channels. For instance, if we have two groups, and channels of the first group are from a normal distribution and those of the second group from the same distribution but scaled up by a large magnitude, the second group can contain more information since its spread is larger. However, after the normalization, the two groups will collapse to the same distribution, and relative importance is not preserved. In contrast, if we use normalization considering all channels in all groups, relative importance is maintained. We posit that maintaining relative importance can make training more stable. Besides, ViT-22B [1] also uses additional LN to queries and keys when performing dot product attention. The extra LN can prevent attention logits (inputs to softmax) from growing too large when scaling up channels and thus prevent one-hot attention weights (outputs of softmax). This applies to our case. When we increase $L_{max}$, we increase the number of channels $(C * (1 + L_{max}))$ of the feature, from which we obtain $f_{ij}^{(0)}$ with a linear operation. Therefore, if we apply the extra LN to $f_{ij}^{(0)}$, we can make sure the input to subsequent softmax does not grow too large when we increase $L_{max}$ and therefore prevent the undesired one-hot attention weights. As in Table 1(a), we see clear improvements after adding attention re-normalization and separable LN. We will add these to the manuscript. [1] Dehghani et al. Scaling Vision Transformers to 22 Billion Parameters. ICML 2023. > 10. More details about “higher degrees can better capture angular resolution and directional information”. Like higher frequencies in a Fourier transform better approximate the underlying signal, higher angular frequencies or degrees of spherical harmonics offer higher fidelity for spherical functions. Several referenced works (e.g. NequIP, SCN, eSCN) show that higher degrees are helpful for energy and force predictions. Please let us know if more details are needed. Pdf: /pdf/9a6fbdd07efdd52e5f936a662eca45825c1a2626.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
RECESS Vaccine for Federated Learning: Proactive Defense Against Model Poisoning Attacks
Accept (poster)
Summary: In this paper, the authors propose RECESS, which is a proactive defense against untargeted model poisoning attacks. Specifically, RECESS proactively detects malicious clients with test gradients and robustly aggregates gradient with a new trust scoring based mechanism. For the former, the key insight is that the update goals of malicious clients are different from benign clients and outliers. Therefore, the defender can amplify this difference by delicately constructing aggregation gradients sent to clients, and then more accurately detect malicious clients and distinguish benign outliers by the comparison between constructed gradients and the corresponding responses. For the latter, a new trust scoring based mechanism is introduced to estimate the score according to the user’s performance over multiple iterations. Experiments on four datasets and various settings including white/black-box, cross silo/device FL, etc are conducted. The authors also compare the proposed approach with five classic and two SOTA defenses. Strengths: The paper is well-motivated to solve two limitations in existing poisoning defense: (1) existing methods are susceptible to the optimization-based model poisoning attacks, leading to accuracy loss, and (2) existing methods cannot distinguish malicious gradient and benign outliers, reducing the generalization performance of the trained model. The proposed method is well-justified both theoretically and experimentally. Extensive evaluations on four datasets with various FL settings and attack settings, as well as the comparison with five classic and two SOTA defense are provided. Weaknesses: While this is a well-written and well-motivated submission, this reviewer has the following concerns. • Limited capacity of defense. The observation that malicious clients aim to make the poisoned aggregation direction as far away from the aggregation gradient without poisoning as possible to maximize the poisoning effect only holds for untargeted model poisoning. • Discussion of compliance missing. While the authors design the test gradients and send it to the clients for identifying the malicious clients, it is unclear how would the malicious attack comply with the designed the rule? Can malicious clients cheat on the test gradient? • Technical details require additional clarification. (1) It seems for this reviewer, it is not clear how the “slight adjust the direction of g_t_i by heuristically selecting several elements” is done. What are those elements? (2) the models evaluated in the experiment are relatively simple and shallow. This raises the concern of scalability to deeper models. (3) Missing details for IID/Non-IID, cross-silo/cross-device configurations, such as how many clients or devices, how much data per client, how to generate the non-IID division. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weakness. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments. We provide our responses and additional evaluations below to address the concerns. --- **Q1:** Limited capacity of defense. **R1:** We appreciate the comment that our method primarily handles untargeted model poisoning. This submission does focus on untargeted attacks, prevalent in prior arts. We believe untargeted attacks pose a more severe threat to FL, for three reasons: 1. Model poisoning attacks directly manipulating gradients are more threatening than targeted backdoor and data poisoning in FL [8, 13], and highly relevant to production FL. 2. Targeted backdoor and data poisoning can be seen as special cases of model poisoning as malicious gradients can be obtained from poisoning datasets [11]. However, it is non-trivial to imitate model poisoning gradients only via data poisoning. 3. Untargeted poisoning degrades model performance more severely than targeted ones in FL [12-14]. Nevertheless, by simply substituting robust aggregation for Byzantine-robust aggregation, our RECESS can effectively defend against other poisoning attacks. To demonstrate RECESS's versatility, we evaluated various attacks, including data poisoning (label flipping) and targeted backdoor (scaling attack). | Dataset | Attacks | FedAvg | FLTrust | DnC | RECESS | RECESS with Median | | -------- | ------------------------------------ | ---------- | ------- | ------ | ------ | ------------------ | | MNIST | No Attack (Model Accuracy) | **0.9621** | 0.9584 | 0.9601 | 0.9598 | 0.9581 | | | Label Flipping (Model Accuracy) | 0.9018 | 0.9448 | 0.9225 | 0.9154 | **0.9548** | | | Scaling Attack (Attack Success Rate) | 1.0 | **0** | 1.0 | 1.0 | **0** | | CIFAR-10 | No Attack | **0.6605** | 0.6341 | 0.6409 | 0.6554 | 0.6383 | | | Label Flipping | 0.4145 | 0.5748 | 0.4284 | 0.4415 | **0.6148** | | | Scaling Attack | 1.0 | 0.0248 | 0.6412 | 1.0 | **0** | The setting of the label flipping attack is the same as [13] and the scaling’s setting is consistent with [23]. The accuracy of the model using RECESS with Median under all attacks rivals conventional models (using FedAvg under no attack). This shows RECESS can effectively defend against diverse FL poisoning. While not demonstrated originally, we respectfully suggest our technique is capable of tackling these attacks straightforwardly. ------ **Q2:** Discussion of compliance missing. **R2:** Thank you for the feedback to strengthen the compliance discussion. Attackers may attempt to evade detection, but we account for potential adaptive attack behaviors and have countermeasures, including increased detection frequency, heightened parameter sensitivity, and delayed reward. These can effectively restrict malicious clients and ensure proper FL system operation. The evaluation of adaptive attacks was covered comprehensively in **Section 4.4**. --- **Q3:** Additional technical details. **Q3.1:** Details of gradient modification. **R3.1:** Thank you for the feedback. Due to page limits, more specifics are in the appendix. As described in **Appendix E.3**, the modification allows arbitrary adjustments to gradient elements within a defined threshold. In experiments, we select and add noise to the first 10% of dimensions, iteratively adjusting until cosine similarity before/after meets the threshold. --- **Q3.2:** The concern of scalability to deeper models. **R3.2:** Thank you for raising this important point. For fair evaluation, we used the same base models as prior works. Nevertheless, our approach can extend to deeper networks. To demonstrate scalability, we conducted additional experiments on larger models including DNN, VGG, and ResNet. As shown in the table below, our method achieves consistent utility for these complex models. | Dataset | Model | Attacks | FedAvg | RECESS | | -------- | -------- | ------------------- | ---------- | ---------- | | MNIST | DNN | No Attack | **0.9487** | 0.9405 | | | | Optimization attack | 0.6873 | **0.9314** | | CIFAR-10 | ResNet20 | No Attack | **0.8449** | 0.8217 | | | | Optimization attack | 0.1718 | **0.8173** | | | | AGR-tailored Attack | 0.1544 | **0.8014** | | | VGG11 | No Attack | **0.7515** | 0.7408 | | | | Optimization attack | 0.5032 | **0.7344** | | | | AGR-tailored Attack | 0.3834 | **0.7037** | These results highlight the general applicability of our framework across model depths. --- **Q3.3:** More details needed on IID/Non-IID and cross-device/cross-silo settings like number of clients, data per client, and how non-IID data is generated. **R3.3:** Similarly, we use the same settings as prior works for fair comparison: * Number of clients/devices specified in **Table 1** for each dataset. For cross-device FL, the server aggregates from 10 random clients per round on CIFAR-10 and 60 random clients on FEMNIST, a common setting in prior work. * CIFAR-10/MNIST: 50 clients with 1000 samples each. Purchase: 80 clients with 2000 samples each. FEMNIST uses a native non-IID partition of 3400 clients owning unique data. * Except FEMNIST is natively non-IID, non-IID CIFAR-10 uses the standard approach from prior work. With *M* classes, clients split into *M* groups. Class *i* example assigned to group *i* with probability *q*, and to any group with probability *(1-q)/(M-1)*. *q=1/M* is IID, larger *q* means more non-IID. Please let us know if any additional clarification on the settings would be helpful. We are happy to provide more details on the configurations used. --- Rebuttal 2: Title: Any additional questions? Comment: Hi Reviewer nW5D #4, Thank you for taking the time to review our work. We wanted to kindly ask if our responses have sufficiently addressed the concerns you previously raised. Please let us know if you need any additional clarification or have any other questions. We are happy to provide more details if needed. Best regards, Authors of Submission7493
Summary: This paper proposes a new defense against model poisoning attacks in FL. The idea is that the server sends some perturbed aggregation gradient to clients in some selected training rounds, and based on the responses, the server adjusts trust scores for the clients. Experimental results show the effectiveness of the proposed defense against state of the art attacks and outperforms state-of-the-art defenses. Strengths: + Important and relevant research problem. + Well written paper. + Interesting and novel approach. Weaknesses: I didn't see severe weaknesses. I think the paper is above the bar. I have three suggestions: 1. Since the paper talks about detection. I would suggest also comparing with detection methods, e.g., the following: FLDetector: Defending federated learning against model poisoning attacks via detecting malicious clients. In KDD, 2022. This detection method may also be adapted to assign trust scores for clients. 2. All the evaluated attacks are for compromised genuine clients. Recent attacks use fake clients, e.g., the following: Mpaf: Model poisoning attacks to federated learning based on fake clients. In CVPR Workshop, 2022. The paper may want to evaluate such attacks. 3. The paper seems to assume that the server sends aggregated gradient to the clients. This is different from standard FL. The paper can make this more clear. Algorithmically, it is equivalent to send the new global model to clients, but it's good to make this clear. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks to the reviewers' insightful suggestions, which are helpful to strengthen the completeness of this work. --- I didn't see severe weaknesses. I think the paper is above the bar. I have three suggestions: **S1:** Since the paper talks about detection. I would suggest also comparing with detection methods, e.g., the following: FLDetector: Defending federated learning against model poisoning attacks via detecting malicious clients. In KDD, 2022. This detection method may also be adapted to assign trust scores for clients. **R1:** Yes, FLDetector also adopts a "trust scoring" approach to identify malicious clients. The key difference lies in how anomalies are detected. We actively send test queries and compare the updates returned from clients, while FLDetector makes predictions based on historical updates. We also evaluated FLDetector, with the following results: | Dataset | Attacks | FLDetector | RECESS | | ---------------------- | -------------------- | ---------- | ------ | | CIFAR-10 (IID) | Optimization attack | 0.6384 | **0.6390**| | | AGR-agnostic Min-Max | 0.5178 | **0.6286** | | CIFAR-10 (Non-IID 0.5) | Optimization attack | 0.6041 | **0.6145** | | | AGR-agnostic Min-Max | 0.5514 | **0.5912** | | CIFAR-10 (Non-IID 0.8) | Optimization attack | 0.3544 | **0.6018** | | | AGR-agnostic Min-Max | 0.0176 | **0.5841** | | FEMNIST | Optimization attack | 0.7217 | **0.7937** | | | AGR-agnostic Min-Max | 0.5616 | **0.7850** | In regular settings, the performance of FLDetector (Median) is comparable to RECESS, but when facing stronger attacks (AGR attack series) or higher Non-IID levels (>0.5 or on FEMNIST task), the model accuracy achieved by RECESS is significantly higher than FLDetector. This is because as the number of benign outliers increases significantly, the fluctuations in clients’ historical gradients become more intense, which greatly reduces the accuracy of FLDetector's predictions. In contrast, RECESS can effectively cope through dynamic testing. Therefore, the comparative experiments with FLDetector further demonstrate RECESS's advantages in tackling the challenging problem of distinguishing between benign outliers and malicious users. --- **S2:** All the evaluated attacks are for compromised genuine clients. Recent attacks use fake clients, e.g., the following: Mpaf: Model poisoning attacks to federated learning based on fake clients. In CVPR Workshop, 2022. The paper may want to evaluate such attacks. **R2:** Thank you for the reviewer's suggestion to evaluate this new attack type named MPAF. We thoroughly read this attack scheme. This attack injects fake clients rather than compromising victim clients, allowing a higher ratio of malicious users. In addition, MPAF uses a very simple attack method that merely attempts to drag the global model towards an attacker-chosen low-accuracy model, with a fixed local update goal and without optimization during poisoning. RECESS does not consider this type of poisoning attack, mainly because: 1. The attacker's assumptions do not conform to real-world FL requirements. We have concerns about its practical feasibility due to the strong assumption of arbitrary fake client injection required. To the best of our knowledge, satisfying such assumptions is highly challenging in practice, as evidenced by the following literature: *Shejwalkar, Virat, Amir Houmansadr, Peter Kairouz, and Daniel Ramage. "Back to the drawing board: A critical evaluation of poisoning attacks on production federated learning." In 2022 IEEE Symposium on Security and Privacy (SP), pp. 1354-1371. IEEE, 2022.* 2. Even if these assumptions held, basic Byzantine-robust aggregation rules could readily defeat the attack. Our proposal can be enhanced by using a common Median or Krum instead of weighted averaging in RECESS to defend against MPAF. 3. Although MPAF submits malicious gradients, it is essentially a data poisoning attack before the FL system starts. Existing detections during training, like FLDetector, also do not consider such attacks. While we appreciate the reviewer's perspective to include recent advancements, given our reservations on the practicality of this attack, we are worried that elaborating on it in depth might give readers the false impression that it poses a serious threat. If the reviewer still believes a comparison is needed, we would be happy to reconsider given further guidance. Otherwise, we hope to focus our manuscript on defending against attacks under more realistic assumptions that are standard in the field. --- **S3:** The paper seems to assume that the server sends aggregated gradient to the clients. This is different from standard FL. The paper can make this more clear. Algorithmically, it is equivalent to sending the new global model to clients, but it's good to make this clear. **R3:** Thank you for pointing this out. Yes, to actively test malicious clients, the server with RECESS deployed needs to send gradients rather than the model weight to clients, and these two approaches are equivalent algorithmically. We will clarify this in a future version. Please let us know if we can provide any clarification or justify our position further. Overall, we sincerely thank the reviewer for the time and help in strengthening our work. --- Rebuttal 2: Title: Thank you for your comments. Comment: Hi Reviewer tdiM #3, Thank you for your thoughtful comments on our paper. We appreciate you taking the time to provide feedback, as it will help strengthen our work. Please let us know if you would like us to clarify or expand on any part of the paper before the end of the reviewer-author discussion period. We look forward to continuing the conversation and thank you again for your review. Best regards, Authors of Submission7493
Summary: The paper presents RECESS, a backdoor defense method in federated learning. The central server keeps querying each client and tries to estimate the trust score of each client to determine if it is malicious or not. The underline intuition of the anomaly detection method is that the malicious client will push the gradient far away from benign directions. Strengths: The paper is clearly written with clear labels and easy-to-follow logic. The method is evaluated on several datasets and federated learning settings. The paper considers the adaptive settings to evaluate its strengths. Weaknesses: The paper does not validate its assumption. The goal of the adversary in poisoning attacks is to mislead the output with the trigger, and there is no need to manipulate the malicious gradient so that it is far away from benign ones. In fact, many works have demonstrated that some "unintended backdoors" only have benign samples, and there are no real malicious triggers. The adversary can later generate triggers based on the trained model (trained on only benign data). As such, I think the paper needs to validate its own assumption or intuition before applying it to practice, especially considering that this may be wrong. I do not see how the method overcomes the limitations of existing methods. The paper mentions that existing work has false positives, i.e., identifying benign ones as malicious. In a typical federated setting with non-IID data, it is common that some clients will have significantly different gradient optimization directions, which can be far away from others and invalidate the detection assumption of this method. How does this method prevent such false positives from happening? The evaluation does not consider methods with theoretical guarantees or stronger adaptive attackers. For example, is it possible to give benign responses to only test gradients while sending malicious gradients to the server? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The paper does not discuss its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the valuable feedback. Please find our response below. ___ **Q1:** The paper does not validate its assumption. The goal of the adversary in poisoning attacks is to mislead the output with the trigger, and there is no need to manipulate the malicious gradient so that it is far away from benign ones. In fact, many works have demonstrated that some "unintended backdoors" only have benign samples, and there are no real malicious triggers. The adversary can later generate triggers based on the trained model (trained on only benign data). As such, I think the paper needs to validate its own assumption or intuition before applying it to practice, especially considering that this may be wrong. **R1:** Thank you for the opportunity to clarify the attack assumptions. We agree it is important to articulate these upfront in the submission. Our assumed model poisoning adversary directly manipulates gradients to cause untargeted harm to server aggregation, representing a new type of attack on FL proposed recently. This differs from traditional data poisoning or targeted backdoor attacks. As the reviewer points out, we should state this distinction more clearly in the main sections, not just the Appendix. Model poisoning has distinct motivations and methods compared to data attacks. As cited [8,13], this threat model has been adopted by related works and poses severe risks in real-world FL: 1. Model poisoning via direct gradient manipulation is more threatening than targeted backdoor attacks [8, 13], and highly relevant to production FL. 2. Targeted backdoor attacks are considered special cases of model poisoning as malicious gradients can be obtained on poisoning datasets [11]. However, it is non-trivial to imitate model poisoning gradients only via data poisoning. 3. Untargeted poisoning is a more severe threat to model performance than targeted ones [12-14]. We apologize for the lack of clarity on threat model assumptions and will move the description in “Appendix A.1 & F.2” to **Related Work**. Although RECESS focuses on model poisoning, we show by replacing weighted averaging with Median aggregation, RECESS can mitigate the scaling attack, a representative backdoor. | Dataset | Attacks | FedAvg | FLTrust | DnC | RECESS | RECESS with Median | | :------: | :----------------------------------: | :--------: | :-----: | :----: | :----: | :----------------: | | MNIST | No Attack (Model Test Accuracy) | **0.9621** | 0.9584 | 0.9601 | 0.9598 | 0.9581 | | | Scaling Attack (Model Accuracy) | **0.9542** | 0.9428 | 0.9518 | 0.9539 | 0.9519 | | | Scaling Attack (Attack Success Rate) | 1 | **0.0** | 1 | 1 | **0.0** | | CIFAR-10 | No Attack (Model Test Accuracy) | **0.6605** | 0.6341 | 0.6409 | 0.6554 | 0.6383 | | | Scaling Attack (Model Accuracy) | **0.6329** | 0.6168 | 0.6123 | 0.6314 | 0.6228 | | | Scaling Attack (Attack Success Rate) | 1 | 0.0248 | 0.6412 | 1 | **0.0127** | Additional results demonstrate that RECESS with Median achieves comparable accuracy to the normal model using FedAvg under no attack and resists the backdoor effectively. --- **Q2**: I do not see how the method overcomes the limitations of existing methods. The paper mentions that existing work has false positives, i.e., identifying benign ones as malicious. In a typical federated setting with non-IID data, it is common that some clients will have significantly different gradient optimization directions, which can be far away from others and invalidate the detection assumption of this method. How does this method prevent such false positives from happening? **R2:** We introduce a new detection approach, proactive probing, to address the intractable problem of benign outlier identification that passive statistics-based defenses struggle with, especially when the degree of non-IID is large, this problem is more difficult to solve. Instead of previously taking a single fixed benchmark to determine anomalies, we make judgments based on the consistency of the client's behavior over time to avoid detection errors. Specifically, - The key insight is that malicious and benign clients (including benign outliers) have fundamentally different update goals. By carefully constructing the aggregation gradients sent to clients, we can amplify this difference and more accurately detect malicious behaviors while distinguishing benign outliers. - Additionally, we propose a new trust scoring-based robust aggregation mechanism that estimates scores based on long-term user performance across iterations, rather than scoring each round independently. This improves fault tolerance and robustness. Together, the novel concepts of proactive probing and robust trust scoring allow RECESS to outperform SOTA defenses against the latest poisoning attacks, while accurately identifying benign outlier gradients. --- **Q3:** The evaluation does not consider methods with theoretical guarantees or stronger adaptive attackers. For example, is it possible to give benign responses to only test gradients while sending malicious gradients to the server? **R3:** We compare five classic robust rules and two SOTA works, all with solid theoretical guarantees. We also analyze RECESS under black box and white box settings and provide theoretical guarantees, as shown in **Proposition 1 in Section 3.4 Effectiveness Analysis** in lines 196-205 of page 6. The proof can be found in **Section D Proof of Proposition 1** in lines 447-467 of page 13. Stronger adaptive attackers will struggle to accurately determine if crafted gradients are tests, limiting poisoning impact under our defense. We refer the reviewers to the evaluation of adaptive attacks in **Section 4.4 Adaptive Attacks** in lines 287-312 of page 9 for more details. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification, especially the threat model part. Now it is clear to me, and I have raised the score to borderline accept with some concerns about the significance of the work (Q2). --- Reply to Comment 1.1.1: Title: Thank you and more clarification Comment: Thank you for your feedback and recognition of our previous response. We attempt to further address your concerns about the significance of the work (Q2). We first describe **two limitations we have investigated in the current model poisoning detection field** to better motivate the significance of our work and clarify the key contributions. - Existing defense mechanisms have been shown ineffective against the latest optimization-based model poisoning attacks like the AGR attack series. - As the reviewer also mentioned, current defenses suffer from high false positive rates, i.e. identifying benign outlier clients as malicious, especially in non-IID settings where benign outliers are common yet easily misidentified as malicious. **Why are existing defense mechanisms ineffective?** Mainly because of their reliance on passive statistical analysis of uploaded information from clients, whether via Byzantine-robust aggregation (Krum, Trmean, etc.), uploaded gradient analysis (FLTrust, DnC), or prediction from historical updates (FLDetector). By adopting fixed benchmarks or statistical majorities, these schemes inevitably sacrifice benign outliers, which is also the most challenging issue in anomaly detection. To further showcase significance, we expand the discussion on **how our work addresses the limitations of prior techniques**. To resolve the aforementioned limitations, we take a brand-new, never seen before approach suitable for FL - the server proactively probes and discovers more behavioral characteristics of benign and malicious clients, instead of passively analyzing limited information from clients. Our **key observation** is that while some clients have different gradient directions, the core difference is: benign clients including outliers optimize the received aggregation result to the distribution of their local dataset to minimize the loss value. In contrast, malicious clients aim to maximize the poisoning effect by pushing the poisoned aggregation direction as far away as possible from the non-poisoned aggregation direction. In other words, benign gradients explicitly point towards their local distribution which is directional and relatively more stable, while malicious gradients change inconsistently and dramatically as they are influenced by benign ones. Leveraging this insight, we amplify this difference by delicately constructing aggregation gradients sent to clients, and then more accurately detect malicious clients and distinguish benign outliers by the comparison between constructed gradients and the corresponding responses. Additionally, to increase tolerance and robustness, we do not perform single-round aggregation as in previous work (e.g. FLTrust), because no mechanism can guarantee always detection correctly. Instead, we design a trust score based on the long-term behavior over multiple rounds for each client, which means even if we misdetect some rounds, clients that generally behave well will not be excluded as in previous schemes. **In summary, the significance of this work is:** Unlike previous anomaly detection-based defenses, this work proposes a novel proactive defense, tailored for FL, against the latest model poisoning attacks. We shift from the classic reactive analysis to proactive detection and offer a more robust aggregation mechanism for the server. As evidence, we directly compare to recent SOTA schemes, demonstrating improved defense over current methods on typical datasets under various FL settings, better positioning how our method departs from and advances beyond current literature. This representing a step forward for model poisoning attack detection. Specifically, this work makes significant contributions in : - Discovering and solving the intractable benign outlier identification, improving generalization. To our knowledge, the proposed proactive detection is the first to address the benign outlier issue, effectively differentiating them from malicious data and reducing false positives. - High detection fault tolerance. The benign clients’ contributions are guaranteed by the trust scoring mechanism based on long-term multi-round behavior even with detection errors. This greatly improves fault tolerance and robustness of detection. Meanwhile, adaptive attacks can be effectively handled only by adjusting the detection frequency or the parameter sensitivity, etc. - Practical wide applicability. We consider more practical real-world FL products than previous discussions, particularly the non-IID setting and cross-silo/device scenarios. Our method provides highly robust FL training, promoting the generalization and utility of FL in practice. Please let us know if you have any additional questions or require further clarification. We are happy to address them before the discussion ends. --- Reply to Comment 1.1.2: Title: Is further clarification required? Comment: Hi Reviewer MKss #2, We greatly appreciate you taking the time to provide thoughtful feedback on our initial submission and response. We aimed to thoroughly address your concerns regarding the significance of the work and wanted to check if our explanations and additions were sufficiently clear in demonstrating the novelty and importance of this work. Please let us know if you have any other questions or need any clarification on these points before the reviewer-author discussion ends. We are happy to discuss this further if needed. Thank you again for your time and consideration. Best regards, Authors of Submission7493
Summary: To defend the untargeted poisoning attack on the federated learning, the authors proposed a new defense called as RECESS, which exploits the outlier detection to analyze the gradients returned from clients. Once the gradient from one client is judged as outlier, this client will be considered as malicious client. The outlier detection is based on the intuition, i.e., for the benign samples even including the outliers, their updated gradients always point to the direction leading to a (local) minimum loss value, while the gradients of poisoned data will point to the reversed direction. Strengths: Similar as previous refs [11, 12] as shown in the paper, this work aims to alarm a malicious gradient from a group of gradients updated from clients in the federated learning. The paper is well-written and can be easily understood. Weaknesses: The main weakness of this work is due to its trivial contribution compared with the STOA [11, 12]. The main intuition for outlier detection, as mentioned in ‘Summary’, has been considered. Compared with STOA, the new metric ‘trust scoring’ does not improve too much as shown in Table2. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: For the trust score, is it possible for attackers to first pretend as benign client to increase the trust score, and then provide the poisoning gradient? How should the defender limit this kind of attack? Could the attacker perform the adaptive attack to poison the global model? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: One limitation of this work is that it only considers the non-target attack, which is less stealthy than target backdoor attack. The main reason is that the non-target attack’s goal is to affect the normal performance of the model, which can be easily recognized by the trainer. I recommend the authors mentioned this limitation in the paper. For more details on the difference between target and non-target attack, please check this survey. E. Cinà, K. Grosse, A. Demontis, S. Vascon, W. Zellinger, B. A. Moser, A. Oprea, B. Biggio, M. Pelillo, and F. Roli. Wild patterns reloaded: A survey of machine learning security against training data poisoning. ACM Comp. Surveys, 2023. Guo, Wei, Benedetta Tondi, and Mauro Barni. "An overview of backdoor attacks against deep neural networks and possible defences." IEEE Open Journal of Signal Processing (2022). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the valuable comments. Please find our responses below. --- **Q1:** The main weakness of this work is its trivial contribution compared to STOA [11, 12]. The intuition for outlier detection mentioned in the Summary has been considered. Compared to STOA, the new 'trust scoring' metric does not improve much as shown in Table 2. **R1:** Our contribution is non-trivial compared with previous SOTA works. We agree existing defenses rely on passive outlier detection, while our RECESS takes a novel proactive testing approach to identify poisoning. As noted, benign outliers and malicious gradients are numerically similar, making the distinction intractable with only existing passive analysis. Our key intuition is benign and malicious behaviors diverge when aggregated gradients are carefully modified for testing. This active detection provides a fresh perspective unseen in current defenses. Leveraging this, RECESS significantly outperforms existing methods against latest model poisoning attacks. The newly proposed "Abnormality" metric (Equation 2), instead of the conventional ‘trust scoring’, provides fundamental advances over previous static defenses. We believe these concepts make our work a non-trivial improvement in poisoning detection. To clarify, Table 2 shows RECESS achieves comparable performance to normal FedAvg under no attacks, and outperforms existing defenses on various tasks under typical settings. For example, under AGR-tailored attack, RECESS-trained model accuracy on FEMNIST is up to 44.84% higher than DnC, and 14.07% higher than FLTrust on CIFAR-10. We further evaluated more challenging settings favoring the attacker, like increasing non-IID (Figure 5), more malicious clients (Figure 6), and stronger threat models. In all scenarios, RECESS's advantages over current defenses become more significant. As defenses face sophisticated practical settings and stronger attacks, RECESS demonstrates greater robustness compared to SOTA approaches. --- **Q2:** For trust score, can attackers first pretend to be benign to increase it, then poison gradients? How to limit this attack? Could attackers perform adaptive attacks to poison the global model? **R2:** We appreciate the concern about inconsistent attacks. However, RECESS detects such behavior inconsistencies over time. The trust scoring of RECESS also incorporates delayed penalties for discrepancies between a client's current and past behaviors (lines 180-184). Additionally, three factors limit intermittent attacks' impact: 1. FL's self-correcting property means inconsistent poisoning is much less impactful. Attackers would not take this approach in practice. 2. In real settings, clients participate briefly, often just once. Attackers cannot afford to waste rounds acting benign before attacking. 3. Defenses aim to accurately detect malicious updates for model convergence. Even if poisoning temporarily evaded detection, attack efficacy would diminish significantly, making it no longer a serious security concern. We evaluate more adaptive attacks in **Section 4.4**, including evasion and poisoning before initialization. Results show RECESS provides robustness against these strong adaptive attacks. --- **Recommendation:** One limitation is this work only considers non-target attacks, which are less stealthy than target backdoor attacks. The reason is that non-target attacks aim to affect model performance, which can be recognized by the trainer. I recommend mentioning this limitation. **Response:** We appreciate the constructive recommendation to evaluate targeted backdoor attacks. This submission focuses primarily on untargeted model poisoning, along with previous SOTA works. We believe model poisoning is particularly impactful to real-world FL deployments for three reasons: 1. Model poisoning via direct gradient manipulation is more threatening than backdoors in FL [8,13]. 2. Backdoors are considered special cases of model poisoning as malicious gradients can be obtained on poisoning datasets [11]. However, it is non-trivial to imitate model poisoning gradients only via training backdoors. 3. Untargeted poisoning degrades overall model performance more severely than backdoors in FL [12-14]. Since RECESS detects inconsistency between client behaviors, it could mitigate targeted attacks from clients with backdoored data by replacing weighted averaging with a Byzantine-robust aggregation like Median. We added experiments showing RECESS with Median effectively defends against the scaling attack, a representative backdoor. RECESS with Median achieves comparable accuracy to the normal model under no attack and resists the backdoor. | Dataset | Attacks | FedAvg | FLTrust | DnC | RECESS | RECESS with Median | | :------: | :----------------------------------: | :--------: | :-----: | :----: | :----: | :----------------: | | MNIST | No Attack (Model Test Accuracy) | **0.9621** | 0.9584 | 0.9601 | 0.9598 | 0.9581 | | | Scaling Attack (Model Accuracy) | **0.9542** | 0.9428 | 0.9518 | 0.9539 | 0.9519 | | | Scaling Attack (Attack Success Rate) | 1 | **0.0** | 1 | 1 | **0.0** | | CIFAR-10 | No Attack (Model Test Accuracy) | **0.6605** | 0.6341 | 0.6409 | 0.6554 | 0.6383 | | | Scaling Attack (Model Accuracy) | **0.6329** | 0.6168 | 0.6123 | 0.6314 | 0.6228 | | | Scaling Attack (Attack Success Rate) | 1 | 0.0248 | 0.6412 | 1 | **0.0127** | We also reviewed the suggested two surveys and will incorporate more comprehensive evaluations of targeted threats in future work to further highlight RECESS's capabilities. We appreciate this insightful discussion to strengthen and broaden the impact of our approach. Please let us know if you have any other suggestions to improve the coverage of additional attack types. --- Rebuttal 2: Title: Has the question been resolved? Comment: Hi Reviewer St5y #1, We greatly appreciate you taking the time to thoroughly review our work and provide constructive comments to improve the manuscript. We hope that our detailed responses in the rebuttal have sufficiently addressed your concerns regarding the contributions of our approach (which differs from the SOTA schemes of passive analysis by providing proactive detection), consideration of adaptive attacks (which we have already analyzed in the paper), and need for supplementary experiments (demonstrating that our method can resist more stealthy targeted backdoor attacks). Please let us know if our explanations and additional details have satisfactorily clarified these points or if you have any other remaining questions. We are happy to provide further information before the discussion period ends to fully address all aspects of your valuable feedback. Thank you again for your time and thoughtful consideration of our work. Best regards, Authors of Submission7493
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Beyond probability partitions: Calibrating neural networks with semantic aware grouping
Accept (poster)
Summary: The paper proposes a novel method for dealing with the well-known problem of miscalibration in deep learning models (and machine learning models more generally). The proposed approach is to partition the input space and fit a calibration function to each set. The paper shows that the idea of partitioning the input space generalizes two special cases --- one-to-one grouping and constant grouping. Experiments are conducted for image classification on the CIFAR10, CIFAR100 and ImageNet datasets. Strengths: I really like what the paper is trying to do. Unfortunately the mathematical presentation contains many technical errors and I have concerns about the theoretical underpinning of the proposed approach as discussed below. Weaknesses: I have two main concerns about the paper. First, the mathematical presentation contains many technical errors: 1. For Lemma 1 to be correct we need the additional assumption that $\forall x \in {\cal D}, \exists G \in {\cal G} \text{ s.t. } g(x) = G.$ This should be explicitly stated. 2. In Definition 2 (Lines 84--90) there appears to be inconsistencies in the definition of a group G. On Line 84 we have $(x, y) \in G$, on Line 90 we have $x \in G$, and on Line 78 we have $g(x) = G$. I understand that the authors may be overloading the notation here so that $x \in G$ implies $g(x) = G$, but it remains to define what $(x, y) \in G$ means. The paper should be rigorous in its mathematical definitions and notation. 3. Equation 13, $g_\phi(x) = g_\phi(z(x))$ makes no sense unless $z$ is the identity mapping (or $g_\phi$ is constant). Moreover, $z$ is defined to be a matrix and given its dimensionality, $|D| \times d_z$, it is unclear what $z(x)$ actually does. I am also concerned about the correctness of the theoretical idea that underpins the approach. Specifically, we can think of the neural network classifier as computing $p(y \mid x)$, which we wish to be calibrated. By partitioning the input space into groups, the method effectively calibrates $p(y \mid x, G)$. However, $p(y \mid x) \neq p(y \mid x, G)$, in general. Rather $p(y \mid x) = \sum_G p(y \mid x, G) p(G \mid x)$. Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: Please respond to the concerns raised in Weaknesses above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 2 fair Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our gratitude to the Reviewer mMe6 for conducting a careful evaluation of our paper and offering valuable suggestions concerning the representation of formulas. We agree with the reviewers' perspectives, acknowledging that in certain equations, we have overloaded symbols without providing intuitive explanations, which may lead to possible misunderstandings. **Q1**: For Lemma 1 to be correct we need the additional assumption. **A1**: *We agree that although this condition is implicitly implied in our statements, a formal definition will be more intuitive and rigorous*. In Definition 1, we defined that "A grouping function $g$ is a mapping from an input $x$ to a group indicator $g(x) = G \in \mathcal{G}$". This definition implicitly requires that the function $g(x)$ is defined on any valid $x$, which is equivalent to $\forall x \in \mathcal{D},\exists G \in \mathcal{G}, s.t. g(x)=G$. We also stated this definition explicitly in our proof of Lemma 1 in the Appendix (Line 11: "The grouping function is defined on all $x \in \mathcal{D}$"). We agree that providing an explicit formulation is more rigorous and intuitive. And we will modify this definition accordingly. **Q2**: In Definition 2 (Lines 84--90) there appears to be inconsistencies in the definition of a group G. **A2**: *Thanks for your advice and sorry for the confusion on the definition of $g(x)=G$, $x\in G$ and $(x, y)\in G$*. We use $x \in G$ and $g(x)=G$ interchangeably, that is, $x \in G \iff g(x)=G$. And the group of a labeled data $(x, y)$ is defined by the group of $x$, that is, $(x, y) \in G$ where $G=g(x)$. We clarify the definition of these symbols in our revised paper. **Q3**: Equation 13, makes no sense unless $z$ is the identity mapping (or $g$ is constant). **A3**: *We aim to introduce our implementation of $g_{\phi}(x)$ in Equation (13)*. The equation $g_{\phi}(x)=g_{\phi}(z(x))$ is meant to indicate that in our implementation, the grouping function depends on the deep feature $z(x)$ rather than raw $x$, which is a more restricted function family. Then we introduce the parameterization of $g_{\phi}(z(x))=\operatorname{softmax}(z(x)\phi_{w} +\phi_b)$. We agree that $g_{\phi}(x)=g_{\phi}(z(x))$ is an inaccurate expression and failed to reflect the difference between $g_{\phi}(x)$ and $g_{\phi}(z(x))$. We will modify Equation (13) to $g_{\phi}(x)=g^\prime_{\phi}(z(x))=\operatorname{softmax}(z(x)\phi_{w} +\phi_b)$ to highlight the difference between $g_{\phi}(\cdot)$ and $g^\prime_{\phi}(\cdot)$. **Q4**: Concerned about the correctness of the theoretical idea that underpins the approach... **A4**: *$p(y|x)=\sum_{G}p(y|x, G) p(G|x)$ is exactly how we calculate the calibrated probabilities*. As pointed out by Reviewer mMe6, by partitioning the input space into groups, the method effectively calibrates $p(y|x,G)$, which corresponds to $\hat{Y}\_{ui}$ in Algorithm 1: Line 7. Here $u$ denotes the $u$-th partition and $i$ denotes the $i$-th group in partition $u$. $\hat{Y}\_{ui}$ denotes the set of group-wisely calibrated probabilities within the group $G\_{ui}$. Then, all the group-wisely calibrated results are gathered in Algorithm 1: Line 9 to obtain calibrated results $\hat{Y}\_u$ with the $u$-th partition. All the calibrated $\hat{Y}\_u$ are averaged to obtain the final prediction $\hat{Y}\_{test}=\frac{1}{U}\sum\_u \hat{Y}\_u$. This is effectively a Monte Carlo estimation of $p(y|x)=\sum\_{G}p(y|x,G) p(G|x)$. In the revised version of the paper, we will optimize the expression of the mentioned formulas as per the reviewers' guidance and incorporate appropriate clarifications. Furthermore, we will thoroughly scrutinize all other equations in the paper to ensure the rigor and comprehensibility of all theoretical definitions and results. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I have more confidence in the correctness of the approach now and will raise my score accordingly. --- Reply to Comment 1.1.1: Comment: We extend our appreciation to the Reviewer mMe6 for your thorough review of the paper and the valuable suggestions for revisions. We will proceed to enhance the paper based on your recommendations.
Summary: This work focuses on improving calibration (e.g., measured with Expected Calibration Error) using partitions of the feature space. This contrasts with most well-established recalibration techniques (isotonic regression, histogram binning, temperature scaling...) that solely use the estimated probabilities of the classifier. Partitions are learned during training with an extra linear layer + softmax on the features extracted by the deep model. Each part of the partition is recalibrated by the accuracy-preserving existing calibration techniques such as temperature scaling or ensembling temperature scaling. This ensures that the proposed method is also accuracy-preserving. The method is benchmarked on 3 datasets (CIFAR10, CIFAR100, ImageNet) and 3 networks on each dataset. The method is compared to 9 existing calibration techniques, such as temperature scaling, isotonic regression, and histogram binning. Strengths: * Learning the partitions using an extra linear layer + softmax is valuable since it is differentiable and enables end-to-end learning jointly with the network. This contrasts with existing works that use hard partitions by thresholding quantiles on a proximity metric [4-5] or by using decision trees [2-3]. Weaknesses: * Positioning in literature: It misses related works that share strong conceptual links with the proposed work. Those works could have been compared either in related work or in experiments. * It is not compared to multicalibration [1], which proposes an algorithm for learning a multicalibrated predictor with respect to any subpopulation class. * The idea of partitioning the network feature space to find local miscalibration has been used before [3-4]. [3] shows that strong local miscalibration arises in modern neural networks and proposes an estimator based on feature space partitions to evaluate them. [4] links this local miscalibration to atypicality. * [5] also links local miscalibration to proximity and proposes an algorithm to recalibrate those subgroups. NB: [5] was released after the NeurIPS submission deadline. I put it for information. * Framing: * The valuable part of finding local miscalibration and correcting them, as it is done in the proposed work, is not to improve calibration but rather to improve estimated individual posterior probabilities (i.e., reducing the epistemic loss) as pointed out in [3], or improving fairness metrics as done in [1]. The problem is that the proposed work is entirely focused on improving calibration, which is blind to local miscalibration. * In addition to the missing related work, the discrepancy between the current framing and the potential of the proposed method can be felt on several levels. In the introduction: "A perfectly calibrated model should be calibrated across any data space partition" (L61). This is false since a perfectly calibrated model in the standard definition of calibration (e.g., in [6] eq. (1)) just needs to be calibrated on level sets of the same predicted confidence (i.e., satisfy $\mathbb{P}(Y=\hat{Y}|\hat{\mathbb{P}} = p) = p$). However, a perfect probabilistic classifier, that is, a classifier that outputs the true individual posterior probabilities $\mathbb{P}(Y=\hat{Y}|X)$, should be calibrated on any data space partition. The exposition of the results focuses on improving ECE (Table 1 & 2). * The improvement in calibration is marginal. * For example, in Table 1, the proposed method GC+TS improves on Temperature Scaling (TS) by absolute differences ranging from 1e-4 to 1e-3 in Expected Calibration Error (ECE), which is an extremely small scale for ECE. Similarly, the proposed method GC+ETS improves on Ensembling Temperature Scaling (ETS) by absolute differences ranging from 1e-4 to 1e-3. In summary, the proposed method has the potential to improve individual posterior probabilities and fairness metrics. Unfortunately, this potential is not exploited in the current framing since it focuses on improving calibration instead. On improving calibration, the proposed method has a too marginal effect. ## References [1] Hebert-Johnson, U., Kim, M., Reingold, O., & Rothblum, G. (2018). Multicalibration: Calibration for the (Computationally-Identifiable) Masses. In Proceedings of the 35th International Conference on Machine Learning (pp. 1939–1948). PMLR. [2] David Durfee, Aman Gupta, & Kinjal Basu. (2022). Heterogeneous Calibration: A post-hoc model-agnostic framework for improved generalization. [3] Alexandre Perez-Lebel, Marine Le Morvan, & Gaël Varoquaux. (2023). Beyond calibration: estimating the grouping loss of modern neural networks. ICLR. (First released on 8 Oct 2022). [4] Mert Yuksekgonul, Linjun Zhang, James Zou, & Carlos Guestrin. (2023). Beyond Confidence: Reliable Models Should Also Consider Atypicality. (First released on 04 Mar 2023, https://openreview.net/forum?id=nPOKJCCvlLF) [5] Miao Xiong, Ailin Deng, Pang Wei Koh, Jiaying Wu, Shen Li, Jianqing Xu, & Bryan Hooi. (2023). Proximity-Informed Calibration for Deep Neural Networks. (First released on 7 Jun 2023). [6] Chuan Guo, Geoff Pleiss, Yu Sun, & Kilian Q. Weinberger. (2017). On Calibration of Modern Neural Networks. Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: * Could you confirm that ECE is the metric plotted in Tables 1 & 2? It is not indicated. * Could you develop on the statistical test used for assessing the significance of the results? What are the p-values obtained with the test? Do you correct for multiple comparisons? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 1 poor Presentation: 2 fair Contribution: 2 fair Limitations: The authors highlight the following limitations: * The method is restricted to deep networks since it works in the extracted feature space. It thus cannot be applied to tree models, for example. * Increasing the number of partitions and using more complex grouping models increased the computational complexity of the method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: I appreciate your provision of recent relevant research. Below, we will expound on the distinctions between our approach and these methods. Moreover, we intend to incorporate an analysis of these references in the revised version of the paper. **Q1**: [1] also proposes an algorithm for learning a multicalibrated predictor with respect to any subpopulation class. **A1**: 1. Although [1] proposed an algorithm to calibrate subpopulation classes, it assumes that the subpopulations are given. While our work proposes a practical method to generate useful partitions in a machine-learning manner. 2. [1] also suggests calibrating on "every subpopulation that can be identified", however, it did not reveal the underlying relationship between calibration and accuracy with the definition in [1] (which is different from ours). Since the one-to-one grouping function is a special case of "any grouping function", our example Example 4 revealed that calibrating on any grouping function is equivalent to the perfect prediction that produces groud-truth $p(y|x)$, which is generally impossible with finite data. Thus, a practical method to find a good grouping function that can improve calibration accuracy is critical, which is our core contribution. 3. [1] did not conduct any practical experiments, while the effectiveness of our method is validated in many real-world datasets and deep models. **Q2**: Comparison with [2]. **A2**: [2] proposed to group the feature space with decision trees, then apply Platt scaling within each group. 1. [2] concentrates on calibration ranking metrics (AUC), while our work concentrates on PCE metrics (ECE) and proper scoring rules (Log-Likelihood). 2. We revealed the connection between calibration and accuracy as discussed in A1.2, which is lacking in [2]. **Q3**: Comparison with [3]. **A3**: [3] estabilish a explained component as a metric that lower-bounding the grouping loss in the proper scoring rule theory, and conducted many experiments to validate their metric. While our work concentrates on improving the calibration and accuracy metrics. (ECE and NLL). **Q4**: Comparison with [4-5]. **A4**: In the view of our framework, both [4] and [5] propose a heuristic grouping function, and apply a specific calibration method within each group. While we agree that manuually designed grouping functions may be more interpretable and may lead to better performance in some specific cases (long-tail data), we aim to propose a learning-based method that can generate partitions in an end-to-end manner. **Q5**: The problem is that the proposed work is entirely focused on improving calibration, which is blind to local miscalibration. **A5**: Sorry for the confusion. As pointed out in our paper, PCE connects calibration and accuray with grouping functions. Since the grouping functions learned by our method is neither solely probability-based (measures calibration) nor one-to-one (measures accuracy), we expected to see improvements in both view. So we report the (top-label) ECE in our paper, and NLL in the appendix Tabble 5 because of the page limit. Our method achieved statisticlly significant improvements in both ECE and NLL. We are not quite sure what is "lcoal miscalibration" mentioned by Reviewer TM5V. Experimentally, our method groups similar data, howerver, the PCE does not require such "local" grouping as discussed in response to A6 to Reviewer KvC9. **Q6**: In the introduction: "A perfectly calibrated model should be calibrated across any data space partition" (L61). This is false since a perfectly calibrated model in the standard definition of calibration (e.g., in [6] eq. (1)) just needs to be calibrated on level sets of the same predicted confidence (i.e., satisfy). However, a perfect probabilistic classifier, that is, a classifier that outputs the true individual posterior probabilities, should be calibrated on any data space partition. **A6**: [6] eq. (1) does not involve any data space parition so the definition in [6] "just needs to be calibrated on level sets of the same predicted confidence". However, when we define calibration with data space paritions, it will become different. What if each group contains $x$ with exactly the same value? The expect $y$ within each group will be exactly $p(y|x)$, which is the core idea of our Example 4. If we measure ECE within such a group, there should be only one level-set with predicted probability $q(y|x)$, and the ECE will be minimized iff. $q(y|x)=p(y|x)$. **Q7**: Could you confirm that ECE is the metric plotted in Tables 1 & 2? It is not indicated. **A7**: We report top-label ECE. **Q8**: Could you develop on the statistical test used for assessing the significance of the results? What are the p-values obtained with the test? Do you correct for multiple comparisons? **A8**: We conducted pairred t-test with 100 runs on different data splits. As reported in the paper (Table 1), following common practice, the differences with $p<0.01$ is considered to be significant. [1] Hebert-Johnson, U., Kim, M., Reingold, O., & Rothblum, G. (2018). Multicalibration: Calibration for the (Computationally-Identifiable) Masses. In Proceedings of the 35th International Conference on Machine Learning (pp. 1939–1948). PMLR. [2] David Durfee, Aman Gupta, & Kinjal Basu. (2022). Heterogeneous Calibration: A post-hoc model-agnostic framework for improved generalization. [3] Alexandre Perez-Lebel, Marine Le Morvan, & Gaël Varoquaux. (2023). Beyond calibration: estimating the grouping loss of modern neural networks. ICLR. (First released on 8 Oct 2022). [4] Mert Yuksekgonul, Linjun Zhang, James Zou, & Carlos Guestrin. (2023). Beyond Confidence: Reliable Models Should Also Consider Atypicality. (First released on 04 Mar 2023, [5] Miao Xiong, Ailin Deng, Pang Wei Koh, Jiaying Wu, Shen Li, Jianqing Xu, & Bryan Hooi. (2023). Proximity-Informed Calibration for Deep Neural Networks. (First released on 7 Jun 2023). --- Rebuttal Comment 1.1: Comment: I thank the authors for their time in answering my comments. I raised my score because I think that learning the partitions in an end-to-end manner is a valuable idea worth sharing with the community and might be used in other related works. However, I am still skeptical about the contribution since the improvements, yet significant according to the tests performed, are marginals. Is this small improvement worth adding this layer of complexity? This questions the usability of the method. Also, focusing on improving ECE/accuracy quite misses the point of working with group calibration. --- Reply to Comment 1.1.1: Comment: We extend our gratitude to the Reviewer TM5v for offering numerous valuable suggestions. ### Our method is worth employing when accuracy preservation is imperative 1. Currently, there are few methods that can enhance calibration performance while maintaining accuracy. Temperature Scaling (TS) and Ensemble Temperature Scaling (ETS) persist as the current state-of-the-art methods, to a certain extent underscoring the challenge of further improvements. In this work, we introduce a pragmatic framework to optimize partitioning, yielding statistically significant improvements. This highlights the viability of this direction for enhancement. Concurrently, we acknowledge that the method we propose for learning partitioning criteria in this paper is far from optimal. *The framework we introduce could serve as a novel paradigm for refining calibration methods in the future, propelling the advancement of calibration techniques*. 2. We comprehend the reviewer's concerns regarding method complexity and performance enhancement. In our supplementary material, we conducted experimental analyses, demonstrating that our approach performs better in terms of training speed when compared with several existing methods (such as DirODIR BBQ, Beta calibration, etc.) on substantial datasets like ImageNet. In terms of implementation, we will make our source code open-source for researchers to utilize our method. Given the scarcity of accuracy-preserving techniques, *we hold the belief that our approach is worth considering for enhancing calibration performance when there is a demand to maintain accuracy*. In many cases, adding an appropriate level of complexity for marginal performance gains is indeed worthwhile. ### We do not target any specific form of local calibration We concur that employing certain local calibration metrics could potentially enhance the demonstration of our method's performance. Similar to practices in some research studies, *if we artificially define a partitioning method, we can directly optimize the calibration error within that partition, consequently enhancing the calibration performance on the corresponding partition in an intuitive manner*. In Section 3.2, "Quantitative Analysis," of the article, we indeed confirm that our method reduces the respective PCE. Nonetheless, our focus within this paper lies on more general partitioning criteria, rather than the performance on a specific partitioning standard. *The grouping function derived from our proposed method does not directly employ any artificial partitioning criteria. Consequently, its partitioning outcomes are challenging to correlate with any intuitive calibration error, such as local calibration*. However, our experiments regarding class-wise ECE, as presented in the rebuttal PDF document and section A6 addressed to Reviewer yhZK, demonstrate that our method still exhibits certain performance improvements in terms of class-level local calibration. On the other hand, *we perceive local calibration itself as a form of prior assumption, asserting that the model should be calibrated within a specific local range*. We believe that if this assumption holds true, not only would there be noticeable enhancements in our directly optimized metric (PCE), but there could also be corresponding improvements in other metrics. In essence, we introduce some prior knowledge to assist the model in predicting probabilities more accurately. Building upon this notion, we have chosen Expected Calibration Error (ECE) and Negative Log-Likelihood (NLL) as the primary evaluation metrics. The experiments indeed indicate that our approach yields significant effects on these metrics.
Summary: This paper proposes a method to calibrate neural networks more effectively. It introduces a general framework called PCE, which aims to provide an explanation for existing calibration methods. The proposed method partitions the input space into groups and minimizes the discrepancy between the predicted probability and the true value within each group, on average. Experimental results demonstrate that this method outperforms standard approaches like temperature scaling (TS). Strengths: - PCE, the framework introduced, offers a unified formulation for existing calibration methods. This may allow for a comprehensive understanding of these methods. - The effectiveness of the proposed method is supported by experimental results. Weaknesses: The paper is not easy to read. I believe there is much room for improving readability. - The term ETS is used without definition (l171) - It took some time to understand that "the number of partitions" is distinct from the number of groups. The study uses ‘partitions’ to mean an ensemble of multiple models. This is confusing and could be improved. - I could not understand how (11) is derived. How is S() determined? - It is difficult to interpret what g_\phi(x) learns. Suppose K=2 as in the experiments. It appears that g_\phi(x) learns binary classification using the pen-ultimate layer feature, which is also employed for the original classification task. However, the purpose and underlying explanation of this process are not clearly explained. - Based on my understanding, g_\phi(x) learns to partition the input space into groups that are the most uncalibrated by minimizing equation (11). Subsequently, performing temperature scaling within each of these groups and aggregating the predictions over different partitions results in improved calibration. - It is crucial to provide a similar, high-level explanation of how the method achieves better calibration, irrespective of whether above understanding is true or not. - There is no explanation what Table 1 etc. report. I assume they are top-level ECE^1 defined in [23]. Another issue is that the improvements the proposed method brings are not substantial, despite the technical complexity involved. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - How can we interpret the mechanism of the proposed method that achieves a better calibration? See also Weaknesses. - Is it sufficient to evaluate only ECE? What about class-wise ECE? - Table 1 shows that for Swin, the original model (uncalibrated) achieves better calibration than TS/GC+TS. Calibration leads to a worse calibration. Why? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: There is no serious issue regarding the limitations of this study. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: I express gratitude for the insightful suggestions you've provided. We will improve the paper based on your suggestions. **Q1**: The term ETS is used without definition (l171) **A1**: We will add the full name of ETS(Ensemble Temperature Scaling) and corresponding reference. **Q2**: It took some time to understand that "the number of partitions" is distinct from the number of groups. **A2**: Sorry for the confusion. Generally, a partition corresponds to a method that splits the data space into disjoint sets, while groups refer to the sets produced by a partition. We will add explanations in the revised paper accordingly. **Q3**: I could not understand how equation (11) is derived. How is S() determined? **A3**: Sorry for the confusion caused by Equation (11). *Equation (11) describes the loss used by the temperature scaling method within each group, which is not a direct estimation of PCE*. Specifically, if we choose $S$ to the average function, so the empirical estimation of $S(G_i)=\frac{1}{|G_i|}\sum_{x, y \in G_i} y$, and the empirical estimation of $S(f_{G_i}(G_i))=\frac{1}{|G_i|} \sum_{x, y \in G_i} f_{G_i}(x)$. Then, we choose the difference measure $\mathcal{L}$ to be the log-likelihood (cross entropy) $\mathcal{L}(S(G\_i), S(f\_{G\_i}(G\_i)))=\sum\_{j} S(G\_i)\_j \log S(f\_{G\_i}(G\_i))\_j$, where $j$ is the class index. The Equation (11) will be minimized by $f_{G_i}(x)=y$, which will also minimize $\mathcal{L}(S(G_i), S(f_{G_i}(G_i)))$. Thus, Equation (11) is a stronger constraint compared with minimizing PCE ($\mathcal{L}(S(G_i), S(f_{G_i}(G_i)))$) directly. Our choice of this objective is motivated by two reasons: First, Equation (11) is able to provide more signals during training since each label $y$ can guide corresponding $f_{G_i}(x)$ directly. On the contrary, if we optimize $\mathcal{L}(S(G_i), S(f_{G_i}(G_i)))$ directly, the labels and predictions are mixed and much label information is lost. Secondly, optimizing Equation (11) aligns well with the calibration method to be used. As we analyzed in A4 to Reviewer yhZK, an objective that aligns with the calibration method may lead to better calibration performance. In the revised paper, we shall provide a comprehensive explanation for Equation (11) to enhance clarity regarding this distinction. **Q4**: It is crucial to provide a high-level explanation of how the method achieves better calibration. **A4**: Thanks for your valuable advice. Since different groups are calibrated with different parameters, the grouping functions should find the partitions that can improve the calibrated performance with the calibration method applied within each group. We agree with Reviewer yhZK that "partition the input space into groups that are the most uncalibrated" is an intuitive and reasonable interpretation. We think a more comprehensive interpretation is that $g_{\phi}(x)$ should learn to generate partitions that best suit the calibrating methods to be used. For example, if a group has only a few elements, this group may be significantly miscalibrated (with a high ECE because of lacking of data). However, the calibration parameters (e.g., $\tau$) are prone to overfitting on this small group and result in significant miscalibration when testing. By joint training of the grouping function $g_{\phi}$ and the calibration method (temperature scaling) on the validation dataset, the grouping function is optimized to improve the overall calibration metrics (PCE), which is less likely to generate extreme partitions that may hurt generalization. **Q5**: There is no explanation what Table 1 etc. report. **A5**: We report top-label ECE. **Q6**: What about class-wise ECE? **A6**: Thanks for your advice. We report class-wise ECE in the rebuttal PDF file. We can observe significant improvements on CIFAR10 and CIFAR100, while the differences on Imagenet are small. We think the reason for the performance on Imagenet is because the number of classes (1000) is significantly large than the number of partitions and groups that we have used in experiments (20 partitions and 2 groups). We have to mention that we achieve such improvements on class-wise ECE without optimizing the class-wise ECE explicitly, which shows the potential of improving calibration metrics within unknown partitions. **Q7**: Table 1 shows that for Swin, the original model (uncalibrated) achieves better calibration than TS/GC+TS. Calibration leads to a worse calibration. Why? **A7**: This is a good question that may reveal the source of performance improvements of ETS compared with TS. We visualize and compare the top-label confidence calibration of TS and ETS in Figure 1 in the rebuttal PDF. We can observe that Resnet18 tends to be overconfident and Swin tends to be underconfidence, and the top-label ECE of Resnet18 and Swin are similar. When we use TS to calibrate Resnet18, the optimal $\tau\approx 1.07$ in Resnet18 is larger than 1, which reduces overconfidence. When we apply TS to calibrate Swin, the optimal $\tau \approx 0.89$ is small than 1, which results in overconfidence in calibrated results. We may expect that the performance on Swin should improve since underconfidence is reduced. However, overconfidence is generally more dangerous for calibration metrics. For example, a predictor that always predicts a uniform distribution among all the classes will be calibrated, which corresponds to $\tau=\infty$. Generally, using a larger $\tau >1$ will push the prediction towards uniform prediction and will be more likely to reduce calibration error (regardless of overfitting). On the contrary, a $\tau<1$ that is smaller than the ground-truth $\tau^*$ may harm calibration metrics significantly which makes it even worse than uncalibrated. The main reason that ETS works much better on Swin is that ETS introduced a uniform component that can enlarge the (effective) value of$\tau$, which can reduce the potential of underestimating $\tau$ significantly.
Summary: In this paper, the authors address the model calibration problem and propose a generalized definition of calibration error called Partitioned Calibration Error (PCE). Previous calibration methods mainly bin the data by the prediction probabilities, while PCE utilizes groups and partition functions to partition the data and requires the model to be calibrated on all partitions. The proposed calibration method jointly learns the partition function and scaling parameters to obtain the optimal calibration error. Experiment results show that the proposed method achieves significant performance improvements across multiple datasets and network architectures. Strengths: 1, The proposed Partition Calibration Error provides a generalized framework for calibration error evaluation. It bridges the gap between overall accuracy and point accuracy by data partitioning. Besides, previous evaluation methods like ECE/class-wise ECE are also included in PCE, which proves that PCE is a powerful framework. 2, Based on PCE, the proposed partition calibration method is reasonable and effective. Benefiting from the generalization ability of PCE, group calibration can be applied to other calibration methods easily. The experiment results also show that group calibration can improve the performance of temperature scaling. 3, The presentation of the paper is well organized. The PCE and PC+TS are clearly introduced and important ablations are provided. Weaknesses: 1, For the grouping function, it is parameterized as a set of weights and biases, which is claimed as semantic-aware. However, there might be a lot of ways to partition a set of data, such as deep feature-based clustering. The author may consider providing the reason why they choose to model the partition function as in equation (13). 2, In the experiment part, the paper only provides the results of partition calibration + (ensemble) temperature scaling. It would be helpful to test more calibration methods with partition calibration, which will demonstrate the generalization ability of partition calibration. 3, In the ablation part, the explanation of Figure 3 (a) is not sufficient. When the group number increases, the NLL and ECE also increase. The authors ascribe it to overfitting, but it might be alleviated by the proposed regularization. It would be beneficial to provide a more reasonable and insightful explanation for this result. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weaknesses part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The method may be limited by the computational complexity when the partition or group number is large. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: I express gratitude for the valuable suggestions you have offered to elevate the quality of our manuscript. We shall duly incorporate the corresponding modifications into the article. **Q1**: For the grouping function, it is parameterized as a set of weights and biases, which is claimed as semantic-aware. However, there might be a lot of ways to partition a set of data, such as deep feature-based clustering. The author may consider providing the reason why they choose to model the partition function as in equation (13). **A1**: Initially, we attempted to employ k-means as the partitioning function generation approach; however, its performance showed a decline compared to TS and ETS. We attribute this observation to two potential reasons. Firstly, clustering methods inherently optimize unsupervised objectives, resulting in partitions that may not enhance performance in terms of classification and calibration metrics. In contrast, our method learns the partitioning function by utilizing log-likelihood and optimizing with validation set labels, leading to partitions directly beneficial for calibration. Secondly, during experimentation, we noticed that although k-means can generate different clustering results by setting random initial cluster centers, the final outcomes often exhibit high similarity, diminishing the performance gain from increasing the number of partitions (ensemble learning necessitates diverse predictive results). Our approach, however, introduces superior diversity. **Q2**: In the experiment part, the paper only provides the results of partition calibration + (ensemble) temperature scaling. It would be helpful to test more calibration methods with partition calibration, which will demonstrate the generalization ability of partition calibration. **A2**: The primary strength of our approach lies in enhancing the performance of accuracy-preserving calibration methods. We also conducted experiments with IRM (non-accuracy-preserving) and observed that GC+IRM did not yield performance improvements compared to IRM alone. We posit that the diminished performance of GC on IRM is primarily attributed to IRM's necessity to learn a monotonic function across all samples. With grouping, IRM can only ensure monotonicity within each group, relinquishing this property across distinct groups, which can substantially undermine its performance. Conversely, TS and ETS do not hinge on the monotonicity of predicted values, allowing them to attain better performance after grouping. Additionally, other methods were not accuracy-preserving and did not demonstrate any advantages in calibration metrics compared to TS and ETS, thus we did not explore them further. We acknowledge that performance improvements in the non-accuracy-preserving methods are indeed meaningful, and we intend to explore this avenue in future research. **Q3**: In the ablation part, the explanation of Figure 3 (a) is not sufficient. When the group number increases, the NLL and ECE also increase. The authors ascribe it to overfitting, but it might be alleviated by the proposed regularization. It would be beneficial to provide a more reasonable and insightful explanation for this result. **A3**: Apologies for any confusion. The concept of overfitting we mentioned here differs slightly from the general sense. In typical problems, overfitting occurs when the model's capacity is too strong, and regularization can mitigate such issues. However, in our specific problem, increasing the number of groups leads to a decrease in the number of samples within each group. As each group utilizes different model parameters, it also effectively enhances the model's capacity. While employing stronger regularization might weaken the model's capacity, we experimentally found that it is insufficient to counterbalance the negative impact caused by reduced data, resulting in models on each group tending to overfit the data within their respective groups and consequently reducing the overall generalization performance.
Rebuttal 1: Rebuttal: The support information of A6 and A7 to Reviewer yhZK (performance on class-wise ECE, and the reason of performance drop of TS in SWIN model), and performance measured by NLL are in the PDF file. Pdf: /pdf/400bbc2b6179be0ceadf59c20070c54e1a440072.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper proposes a generic form of a calibration error metric. Partitioned Calibration Error (PCE) evaluates calibration errors across the group of partitions to alleviate data uncertainty. Furthermore, the paper presents experimental results on CIFAR-10, CIFAR-100, and ImageNet using several backbone models. Interestingly, the learned partitions make visually similar groups. Strengths: - Paper is easy to follow. - The authors propose a unified framework to evaluate confidence calibration error. Weaknesses: - The confidence calibration performance is on par with ETS. - It would be more informative if the authors presented effects of poor partition functions for confidence calibration. - Only vision applications are considered in this paper. - The meaning of semantically relevance is somewhat vague. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - As the learned partitions produce visually similar groups, are the features in their space proximity to each other? - Will the performance be negatively affected if the partition function creates groups without any visual similarity? - Does selection of hold-out data affect the performance? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: I extend my gratitude for the queries and suggestions you've raised. We will duly enhance the relevant sections in the revised version of the article. **Q1**: The confidence calibration performance is on par with ETS. **A1**: *Our method has achieved statistically significant performance improvements in both TS and ETS*. As we primarily focus on accuracy-preserving calibration and have limited data for calibration, achieving perfect generalization calibration performance becomes challenging. With the development of calibration methods, the room for improvement in the ECE metric becomes increasingly limited, making further enhancements more difficult. In this context, our method has demonstrated statistically significant performance (with $p < 0.01$) improvements compared to baseline methods on multiple different datasets and models. Our approach not only enhances the ECE metric but also improves proper score rules, such as log-likelihood (Table 2 in rebuttal PDF), indicating that it not only improves model calibration (epistemic uncertainty) performance but also accuracy performance concerning aleatoric uncertainty. Additionally, our method significantly improves calibration performance without compromising accuracy. **Q2**: It would be more informative if the authors presented effects of poor partition functions for confidence calibration. **A2**: This is an excellent question. *In our paper, we discuss the roles of two extreme partitioning functions and connect them using PCE*. Specifically, if we group all samples into one group, the model will predict the prior distribution for all samples, resulting in a high classification error rate but nearly perfect calibration error. On the other hand, if we use a one-to-one mapping as the partitioning function, each sample will be mapped to a different group, leading to our optimization objective degenerating into a standard log-likelihood, which is equivalent to direct fine-tuning and resulting in significant miscalibration. Efficient calibration methods proposed in prior research lie between these two extreme cases, allowing for improvements in calibration metrics. For general partitioning functions, it is challenging to intuitively evaluate which one is better. For instance, we have also attempted to use sample features for clustering (see A1 to Reviewer BrmR), but our experimental results indicate that using k-means clustering as the partitioning function leads to a decrease in calibration performance (compared to TS and ETS). This suggests that the clusters generated by the clustering may not serve as effective partitioning functions. **Q3**: Only vision applications are considered in this paper. **A3**: *We conducted experiments on multiple image datasets and network architectures*. We selected CIFAR10, CIFAR100, and Imagenet as the datasets, along with various network structures to cover different accuracy scenarios. For each combination of dataset and network structure, we performed 100 experiments and conducted significance testing. Exploring the calibration performance on other tasks and models is a future direction of our research. **Q4**: The meaning of semantically relevance is somewhat vague. **A4**: *In this article, our core idea is that using only probability information to construct the partitioning function is insufficient to capture information beyond the predefined classes*. However, if the partitioning function can be defined over $x$, we can learn it based on other information apart from class probabilities. In this context, semantic information emphasizes utilizing information from $x$ (beyond class probabilities) to construct the partitioning function. We acknowledge that this might lead to some misunderstanding, as class probabilities themselves can also be considered as a form of semantic information. Therefore, we will modify the wording of this part in the paper. **Q5**: As the learned partitions produce visually similar groups, are the features in their space proximity to each other? **A5**: Indeed, we added a linear layer after the features to predict the corresponding group. This can be seen as a linear partitioning of the sample space, where each group's sample features are closely located in the space. Due to the randomness in the samples and network initialization parameters, optimizing the objective function multiple times results in different partitions, thereby achieving ensemble diversity and enhancing calibration performance. **Q6**: Will the performance be negatively affected if the partition function creates groups without any visual similarity? **A6**: As mentioned in A5, the partitions generated by our method are always close in the feature space. However, this does not imply that visually similar partitioning methods will necessarily improve performance. For example, clustering methods may produce visually similar partitions, but they might lead to worse calibration results. On the other hand, visually dissimilar partitions may not necessarily result in worse performance. For instance, if the model is overly optimistic in both class "a" and class "b" (where "a" and "b" are visually unrelated), grouping them together could help mitigate overall miscalibration issues. Thus, whether a partitioning function can improve calibration performance may be related to whether there is consistent overconfidence or underconfidence within each partition. **Q7**: Does selection of hold-out data affect the performance? **A7**: Indeed, the hold-out (HO) data significantly influences calibration performance. To ensure that the performance differences between different methods are not caused by the dataset, we randomly split the HO data and test data 100 times for each dataset-model pair. This approach allows us to conduct paired t-tests to evaluate the significance of the performance differences.
null
null
null
null
null
null
PERFOGRAPH: A Numerical Aware Program Graph Representation for Performance Optimization and Program Analysis
Accept (poster)
Summary: The paper proposed a novel graph-based program representation, PerfoGraph, which is based on the current state-of-the-art method PrograML and aims to address its limitation by providing numerical awareness, introducing new tactics for handling local variables, and supporting aggregate data types. By conducting experiments on various performance-optimizing oriented downstream tasks, it is shown that the proposed PerfoGraph representation is more effective compared to the existing methods. Strengths: - The paper was clearly presented overall. - The paper showed great originality in proposing a novel program representation method, PerfoGraph, that enhances previous results by providing numerical awareness, introducing new tactics for handling local variables, and supporting aggregate data types. - Regarding numerical awareness, the authors introduced a novel idea to split the numbers into digits and positions before encoding, which could effectively represent numbers without a huge vocabulary. - Regarding local variables, the author took good consideration of GNN's mechanism and adjusted the graph representation accordingly. Weaknesses: - Unconvincing experiment setup in comparison with PrograML. In the original PrograML paper, MPNN was introduced as an encoder. However, the proposed PerfoGraph chose RGCN for the same role. The reviewer believes that it is possible to also use RGCN as PrograML's encoder. Thus, it is unclear whether the performance improvements in the comparisons with PorgraML stem from the novel graph representation or from a more suitable encoder. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. The authors claimed that the proposed PerfoGraph representation could better capture **composite** data structure information. However, the paper only introduced a novel method for incorporating list-like data structures, such as **arrays and vectors**, into the graph representation of the program. The reviewer's question is how PerfoGraph could support more general composite data types such as struct. From the current presentation, the reviewer's opinion is that the term **aggregate data types** suit better. 2. How does the choice of aggregation function impact the performance of PerfoGraph? Is it possible that the information compression introduced in the aggregation function confuses the deep learning model? 3. Do the authors consider moving the part of the ablation study from Supplementary Materials to the main text to better clarify the importance of numerical awareness in program representations? 4. Possible error in Figure 2a: In section 4.1, the authors used Figure 2a as an example of the PrograML representation to point out that PrograML assigns a position feature to the address edge of the store node. However, there is no corresponding label near the first store node. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have addressed the limitations of their work in Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - **(W1) Unconvincing experiment setup in comparison with PrograML:**\ In the downstream tasks of sections 5.3, 5.4, 5.5, and 5.6, we used the RGCN encoder with ProGraML and compared it with PerfoGraph. For each of these downstream tasks, we used an RGCN model with the same architecture as described in Table 1 of Supplementary Materials both for PerfoGraph and ProGraML so that we can be sure that the differences in results are only because of changing the program representation. And as mentioned in the paper, results show that *PerfoGraph* representation performs better than *ProGraML* in almost all these tasks. - **(Q1) Aggregate data types:**\ You are right. *PerfoGraph*, at the moment, only supports aggregate data types. Therefore, we will update the paper and will mention aggregate data types instead of composite data types. As explained to the question of inFa reviewer, *PerfoGraph* does not handle structs in a specific way different from *ProGraML*. In fact, *PerfoGraph* shows struct through a series of nodes. For example, a simple struct such as ``` struct { int myNum; int myAge; } myInfo; myInfo.myNum = 815; myInfo.myAge = 23; ``` In LLVM IR is: ``` %2 = alloca %struct.anon, align 4 %3 = getelementptr inbounds %struct.anon, %struct.anon* %2, i32 0, i32 0 store i32 815, i32* %3, align 4 %4 = getelementptr inbounds %struct.anon, %struct.anon* %2, i32 0, i32 1 store i32 23, i32* %4, align 4 ``` *PerfoGraph* will present these instructions by having\ Control nodes as: `alloca`, `getelementptr`, `store`, `getelementptr`, `store`\ Data nodes as: `%struct`, `i32 0`, `i32 1`, `i32 23`, `i32 815`\ Which is basically the same way that *ProGraML* handles struct. We will update the paper and use the term **aggregate data types** to avoid confusion. - **(Q2) Does the choice of aggregation function impact the performance of PerfoGraph?**\ We do observe some impact of aggregation function in the `GraphConv` layer of our GNN pipeline. We found `sum` aggregation provides the best performance overall. We did an experiment with the Device Mapping task using different aggregation functions. | Device | Sum agr. accuracy | Max agr. accuracy | Mean agr. accuracy | Min agr. accuracy | |--------|------|------|------|------| | AMD | 0.94 | 0.90 | 0.93 | 0.79 | | NVIDIA | 0.90 | 0.85 | 0.90 | 0.84 | - **(Q3) Moving Ablations study to the main paper:**\ As discussed in global repose, the ablation study was moved to the supplementary material file due to the lack of space in the paper. But for the final version of the paper, we will make sure to include the ablation study as requested. Thank you for pointing this out. - **(Q4) Possible error in Figure 2a:**\ Thanks for pointing this out. The label is there but it is overlapping with the edge itself that's why it is difficult to see. We will fix it in the revised version of the paper. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. It has further clarified my understanding of the proposed work. Given that I've previously given a 7-accept based on the novelty and originality of the proposed paper, I've chosen to keep my score unchanged.
Summary: The paper presents a graph-based representation for LLVM IR programs for processing with graph neural networks. The representation is based on ProGraML, an established technique for such graph representations, but adds several features to the graph: collapsing nodes that refer to the same variable, adding additional edges between memory stores and the variables they can affect, explicitly representing constants in the graph, and representing compositional data types (e.g., arrays of arrays) as multiple distinct nodes rather than single nodes. The authors evaluate across a range of LLVM IR learning tasks, and show that Perfograph outperforms ProGraML (and all other baselines in most contexts). Strengths: * Finding and understanding good program representations is an important problem in the field of learning for code * The proposed approach identifies a novel set of features which qualitatively seem important, and which are validated to be useful in the ablation study * Assuming that the evaluation is indeed fair (see weaknesses below), the proposed approach significantly outperforms ProGraML, achieving a large increase in accuracy (seemingly solving many of these tasks near perfectly). Weaknesses: * Fairness of comparison against baselines: * Perfograph is a version of ProGraML with some added and removed nodes and edges. Each added node/edge adds additional power for a graph neural network. For a fair comparison against baselines (ProGraML in particular), the authors must show that for a given compute budget (i.e., FLOPs or wall clock time), Perfograph outperforms ProGraML; this would involve either scaling up/down ProGraML or Perfograph. * Are the datasets used exactly the same as in prior work? For example, Section 5.4 reports using an 80/20 train/test split, which I cannot find discussed in [10] * Line 372 mentions that the authors "could not reproduce the results in PROGRAML paper" for the algorithm classification task. Does this lack of reproducibility also affect the evaluated benchmarks that were not in the ProGraML paper (seemingly, everything other than 5.2)? If so, this is a major point that must be discussed at a high level in the paper. * Given the advent of large-context Transformers, I would be curious how well such a model performs when applied to the raw IR (i.e., without an explicitly featurized graph representation). However, since this paper is explicitly building on the prior work of ProGraML, I do not consider this to be a condition for acceptance. * Overall, the clarity of the submission is moderate. * Section 4.1: without a deep understanding of the ProGraML representation, it is very hard to understand these graphs (and therefore the issues that the proposed approach fixes in them). Perhaps highlighting or otherwise identifying the specific nodes discussed could help. * Line 323: is this an arithmetic or geometric average? * Line 368: what are the units of these errors? * Several claims about novelty or related work are misleading: * Lines 72-73: "this token-based representation... fails to capture..." This claim is misleading. Sequence processing models are entirely capable of learning such graph-structured relationships; such complex relationships are also not unique to code (e.g., natural language has pronouns, nested structures, etc.) * Line 176: the proposed "Digit Embedding" is very similar to some digit representations approaches used in Transformers (for example, Geva et al. 2020) which embed each digit, add/concatenate with a positional encoding, and aggregate with attention. * I believe the paper would be significantly stronger with the ablation studies moved to the main body, but this is a stylistic choice and is not a factor in my score * If I'm understanding Appendix 2 correctly, the Perfograph GNN is evaluated with only 2 layers; ProGraML is evaluated with 6 (Cummins et al., Section 6.2). How was this parameter chosen? What is the average diameter of the evaluated graphs (for both Perfograph's graphs, and ProGraML's graphs)? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: * Please see the questions in "clarity" in the discussion of weaknesses above (and other questions throuhgout). * Does Perfograph use more compute or have more capacity than ProGraML? Does it still outperform ProGraML when given equal power? * What steps have the authors taken to ensure that the implementation of ProGraML compared against in Sections 5.3-5.7 is correct (since as noted in Line 372, in one context the authors "could not reproduce the results in PROGRAML paper")? * Do all other baselines use the exact same training/test set split? * How were hyperparameters tuned for Perfograph? I would be willing to raise my score given evidence that for the same compute budget, Perfograph still outperforms ProGraML (or evidence that my assumption here, that Perfograph uses more compute than ProGraML, is incorrect). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The paper does not explicitly discuss limitations of the technique. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - **(W1.a, Q2) The computation cost PerfoGraph:**\ We conducted an experiment in terms of the time it takes to train the GNN model using *PerfoGraph* versus *ProGraML*. We found out that *PerfoGraph* takes less training time than *ProGraML*, and *PerfoGraph* also has better performance than *ProGraML*. For more details on this experiment, please refer to the global response. Thank you. - **(W1.c, Q3) Lack of reproducibility of ProGraML for algorithm classification:**\ Apologies for any ambiguity. As detailed in Section 5.7, we assessed the algorithm classification performance using both *ProGraML* and *PerfoGraph*. However, our replication did not mirror the error rate mentioned in the *ProGraML* paper. We observed an error rate of 6%, whereas the *ProGraML* paper cited 3.38%. This minor discrepancy in error rate could potentially be rectified with further fine-tuning of the GNN model tailored for *ProGraML*. Nonetheless, the *ProGraML* repository did not provide any checkpoints. For this paper, we initialized the GNN models with identical settings and gauged the performance of *ProGraML*. In the interest of fairness, we have presented both the error rate from the *ProGraML* paper and our results.\ For the Device Mapping task (section 5.2), we were able to reproduce the same results as reported in the *ProGraML* paper. So, we compared *PerfoGraph* with the accuracy mentioned in the *ProGraML* paper.\ Other downstream tasks (sections 5.3, 5.4, 5.5, 5.6) are not described in the *ProGraML* paper. For a fair comparison of *PerfoGraph* with *ProGraML* for these downstream tasks, we keep everything the same in our pipeline except for the program representation. - **(W2.a) The difference with ProGraML in Figure 2:**\ We will make sure to highlight those nodes and edges in Figure 2 for better visibility. Some enhancements of PerfoGraph are shown in the global response pdf file. - **(W2.b) Type of average on line 323 (Arithmetic of Geo Mean?):**\ This is an arithmetic average. We ensure to clarify this in the paper. - **(W2.c) Unit of errors on line 368:**\ We represent the error rates as percentages. For example, inst2vec has an error rate of 5.17 means inst2vec has an accuracy of 94.83%. We will include the percentage sign in the revised submission to avoid confusion. - **(W3.a) The claim of token-based representation fails to capture relations is misleading:**\ Sorry for the confusion. While there have been studies that incorporate relations in token-based representations, these often necessitate specialized models, tailored representations, or additional training data and time to effectively capture relation information. In contrast, previous works (Zhang, Jian, et al. 2019, and Chen, Le, et al. 2023) have shown that graph representations of code inherently encapsulate critical relational data more naturally than their token-based counterparts. Using compiler-generated IR, *PerfoGraph* ensures that relational information is both accurate and precise, making it an efficient representation for models to capture code features. We will adjust our description of token-based representation to ensure clarity. - **(W3.b) The Digit Embedding is similar Geva et al:**\ Thank you for pointing this out. The idea of digit embedding was actually inspired by input and position embeddings in Transformers. We applied the position encoding to each one of the digits of a number. The idea can be considered similar to character encoding with assigning a position to each character. However, to the best of our knowledge, we did not find any tool or model that can do this type of embedding for numbers. Hence we proposed Digit Embedding. - **(W4) Moving the ablation to the main paper:**\ Thank you for mentioning this point. As mentioned in the global response, we will make sure to include the ablation study in the paper. - **(W5) What is the diameter of the graphs:**\ We measured the average diameter of the 30k training IR files in the Parallelism Discovery task as an example. Below are the results:\ Avg diameter of PerfoGraph: 28.3\ Avg diameter of ProGraML: 31.54 - **(W1.b, Q4) Is Train/Test Split in other baselines the same?**\ Yes, the datasets are used exactly the same as in prior works.\ Sec 5.2: Same as *ProGraML* paper. We do 80% training, 10% validation, and 10% testing\ Sec 5.3: Same as *Graph2Par* paper. Around 30k for training and testing on three subsets *Pluto* (4032 IR files), *autoPar* (3356 IR files), and *DiscoPoP* (1226 IR files)\ Sec 5.4: *Graph2Par* paper reproduced the *Pragformer* work to show the comparisons, and in the *Pragformer* repository, they used 80% and 20% split. So we used the same 80% training and 20% testing split ratio\ Sec 5.5: Same as in the study of TehraniJamsaz et al., 10-fold cross-validation\ Sec 5.6: Same as in the *inst2vec* paper, leave one out cross-validation\ Sec 5.7: Same as *ProGraML* paper. 240k IR for training and 10k IR file for testing. - **(W5,Q5) How are hyperparameters tuned?**\ We have tuned the hyperparameters of the model experimentally. We did experiment with a higher number of layers; however, it did not bring any benefits, despite the higher computation cost. Below we show the comparison of accuracies among different layers of our GNN-based model for the task of Device Mapping. | Device | 2 layers accuracy | 3 layers accuracy | 4 layers accuracy | 5 layers accuracy | 6 layers accuracy| |--------|------|------|------|------|------| | AMD | 0.94 | 0.88 | 0.91 | 0.87 | 0.90 | | NVIDIA | 0.90 | 0.85 | 0.90 | 0.90 | 0.88 | We use Adam optimizer as it is the default configuration for DGL-based RGCN and is widely used (e.g., in the ProGraML paper). We use Relu activation (default of DGL-RGCN) as it is also widely used. We experimented with commonly used hidden layer sizes (32, 48, and 64) and learning rates (0.1, 0.01, and 0.001) and observed best results are obtained with hidden layer size 64 and learning rate 0.01.
Summary: This work proposes Perfograph, a program graph representation based on LLVM-IR and an extension to ProGraML. This graph representation is designed for the purpose of performance optimization and program analysis applications based on graph neural networks (GNN). This work made three contributions to the existing representation. First, it tracks reused local identifiers and memory locations. Second, it uses decimal-based encoding to embed numerical constants into the graph. Last, it breaks down array and vector types into multiple nodes. Perfograph is tested on 6 downstream tasks: device mapping, parallelism discovery, parallel pattern detection, NUMA and prefetchers configuration prediction, thread coarsening factor (TCF) prediction, and algorithm classification. In the device mapping challenge, the new model outperforms ProGraML by 7.4% on the AMD dataset and 10% on the NVIDIA dataset. Strengths: 1. This paper is well organized. 2. This work clearly listed all 3 of its design decisions. 3. This work contains detailed evaluations on multiple targets. 4. Each evaluation contains a carefully described problem definition, dataset, and results. Weaknesses: 1. The improvement of this work over ProGraML seems limited. For example, the 3 main design contributions are not fundamentally new. They are more like minor tweaks on the existing ProGraML system. 2. The evaluation section does not show how each design choice contributes to the overall improvement. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. It claims that “Perfograph is built on top of ProGraML” in Section 4, but also claims “A **new** compiler and language agnostic program representation based on LLVM-IR”. Is this work an extension of ProGraML, or is it a new system? This is somewhat unclear. 2. Section 4.1 claims that this work combines multiple writes to the same variable into one node. However, LLVM deliberately separates multiple writes for the ease of certain aliasing analyses, the opportunities of reordering, and optimizing for parallelism. It would be helpful to provide more evidence that this combined representation outperforms separated representation in all (or most) downstream tasks. 3. Section 4.2 mentions that decimal-based encoding is used for numerical constant embedding. This seems counterintuitive since computers natively speak binary. 4. Section 4.3 claims that Perfograph supports composite data types (but only covers arrays and vectors). It would be helpful to further explain how Perfograph deals with programs that use structs (and maybe unions). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - **(W1) Differences and Contributions that differentiate PerfoGraph from ProGraML:**\ While we acknowledge the similarities between *PerfoGraph* and *ProGraML*, as both represent programs as graphs using LLVM Intermediate Representations, *PerfoGraph* differs itself from *ProGraML* by:\ A) A more precise representation of local variables\ B) Incorporating and embedding numbers\ C) Supporting aggregate data types.\ Moreover, PerfoGraphs enhancements are not dependent on ProGraML. For instance, numerical embedding and our aggregate data types representation can easily be incorporated into other graph representations. - **(W2) Contribution of each design choice:**\ Thank you for mentioning this. To provide insights into the contribution of each design choice, we have conducted an ablation study. For instance, for the Device Mapping dataset removing the composite data types increased the error rate by 13% for the AMD dataset. To see all the details on the ablation study, please refer to the global response and the supplementary materials file. Thank you. - **(Q1) Is this work an extension of ProGraML, or is it a new system?**\ We drew inspiration from *ProGraML* but observed certain limitations in its design. Specifically, *ProGraML* sometimes fails to capture some essential information in its representation, notably numbers, composite data type nodes, and local identifiers. In *PerfoGraph*, we prioritized refining the representation to address these shortcomings. Given that ProGraML integrates foundational graphs like CFG and DFG when using IR as the basic element for graph construction, it is challenging to bypass it entirely. Our approach in *PerfoGraph* sets itself apart by offering novel digital embedding for numbers and tailored representations for data type nodes and local identifiers. - **(Q2) Combining multiple writes to the same variable:**\ You are indeed right that LLVM separates the multiple writes to the same variable for the sake of simplifying optimization choices. In our representation, we do not combine multiple writes to the same node. Sorry for the confusion. We make sure to clarify this in the paper. For the example in Figure 2a in the paper, there are two `store` instructions. The first `store` instruction corresponds to assigning 0 to the variable *i*, and the second `store` instruction corresponds to increasing the value of *i* by 1. In LLVM IR, this is shown as follows: ``` store i32 0, i32* %2, align 4 store i32 %4, i32* %2, align 4 ``` As we can see, the two `store` instructions are writing to the same local identifier, which is `%2`. However, ProGraML creates a separate node to present `%2` in each `store` instruction, as we can see in Figure 2a. This will make it difficult for DL models to understand that those `store` instructions are writing to the same local variable. In contrast, *PerfoGraph* considers only one node to represent `%2`. For each temporary variable (`%2`, `%3`, …) that LLVM creates, *PerfoGraph* will also create separate nodes. The issue with ProGraML was that it was creating several nodes for the same temporary variable (Figure 2a). - **(Q3) Numerical Embedding is counterintuitive since computers speak binary:**\ We proposed Digit Embedding to enable DL/ML models to be aware of numerical values. That is, if, for example, the loop bound (which can be a number) is known at compile time, we want the DL model to understand the scale and value of the number without losing its generalizability, that is, facing unknown numbers at test time. Digit embedding will essentially help to generate embedding for numeric tokens, which are present in the nodes of the graph representation. This will help DL models to make better predictions. Moreover, we intend to make the DL model aware of the numerical values at compile time. Therefore, we assume at the LLVM IR phase, we are not dealing with binary code. - **(Q4) Support for struct:**\ Currently, *PerfoGraph* does not handle structs in a specific way different from *ProGraML*. In fact, *PerfoGraph* shows struct through a series of nodes. For example, a simple struct such as ``` struct { int myNum; int myAge; } myInfo; myInfo.myNum = 815; myInfo.myAge = 23; ``` In LLVM IR is: ``` %2 = alloca %struct.anon, align 4 %3 = getelementptr inbounds %struct.anon, %struct.anon* %2, i32 0, i32 0 store i32 815, i32* %3, align 4 %4 = getelementptr inbounds %struct.anon, %struct.anon* %2, i32 0, i32 1 store i32 23, i32* %4, align 4 ``` *PerfoGraph* will present these instructions by having\ Control nodes as: `alloca`, `getelementptr`, `store`, `getelementptr`, `store`\ Data nodes as: `%struct`, `i32 0`, `i32 1`, `i32 23`, `i32 815`\ Which is basically the same way that *ProGraML* handles struct. We will update the paper and use the term **aggregate data types** to avoid confusion, as suggested by another reviewer. Thank you for pointing this out. --- Rebuttal Comment 1.1: Comment: Many thanks for the authors' detailed response. The new performance data and technical details are very helpful for me to better understand this work. Although I still have some minor concerns about the novelty of this work due to the similarities between PerfoGraph and ProGraML, I would like to change my score to "Weak Accept" because I believe this work contains enough intellectual merits.
Summary: The research identifies limitations in the current state-of-the-art program representation PROGRAML, in capturing features of numerical values and composite data types. To address these limitations, this work introduces an enhanced GNN-based program representation for LLMV-IR with modifications to the nodes and edges of the program graphs to capture features of numerical values and composite data structures. It also presents a digit embedding approach to enhance numerical awareness within the representation. The evaluation shows that PERFOGRAPH reduced the prediction error rate by 7.4% (AMD dataset) and 10% (NVIDIA dataset) in the Device Mapping challenge compared to PROGRAML. Additionally, it outperforms PROGRAML and traditional rule-based methods in tasks such as determining if a loop is parallelizable, classifying the parallel patterns, and configuring NUMA and prefetchers selections. Strengths: 1. PERFOGRAPH employs effective graph modifications to enhance the learnability of the LLVM-IR graph for GNNs. These methods are well-reasoned and provide valuable domain information for improving program graph abstractions in learning. For instance, PERFOGRAPH unifies the graph nodes for the same local identifier variable in the program graph. It also modifies the LLVM IR graph to add edges from the store instruction to the variables it modified. These modifications all simplify the complex structure of the LLVM-IR graph. 2. PERFOGRAPH also distinguishes between composite data structures and regular data types, which prior approaches did not adequately address. This distinction is crucial because operations involving composite data structures can differ significantly from those involving regular data types. 3. The evaluation of PERFOGRAPH is comprehensive, considering six performance-oriented tasks. The results demonstrate that PERFOGRAPH outperforms PROGRAML in terms of accuracy on these tasks. More importantly, in parallelism discovery tasks, PERFOGRAPH achieves higher accuracy compared to traditional rule-based approaches. Weaknesses: 1. It is unclear to me how the digit embedding ensures that unknown numbers are not encountered during the test phase and what specific information is encoded through the addition of digit and position embeddings. 2. The reasons behind representing multi-dimensional arrays as multiple nodes are not adequately explained. It remains unclear how this representation enhances expressiveness or facilitates reasoning by GNNs. 3. An ablation study that isolates the individual benefits of LLVM-IR graph modification, digit embedding, and support for composite types is absent. Such a study would provide valuable insights into understanding the specific advantages offered by each component. 4. PERFOGRAPH may have higher computational requirements compared to PROGRAML, potentially posing challenges for its practical deployment. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. It would be helpful to visualize the digit embedding values of different numerical values to facilitate better understanding. 2. It would be beneficial to compare the cascaded context approach for representing multi-node representation with alternative representations (e.g. storing only the size of one rank) to assess the effectiveness in encoding array information. 3. To provide a more comprehensive analysis, I recommend conducting an ablation study for each major representation enhancement to understand the specific benefits of each component. 4. It would be valuable to measure and compare the inference time of PERFOGRAPH with PROGRAML and other rule-based analysis methods to evaluate the computational efficiency. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - **(W1) Unknown numbers during the testing phase:**\ Here, by unknown numbers, we mean numeric tokens not encountered by the model during the training phase but present in the testing phase. Digit Embedding allows us to generate embedding for those unknown numbers. Because it breaks the numbers into digits, which are from 0 to 9 only. Then we get the numeric position of each of the digits. Then embedding is generated by combining each digit with its numeric position. This way, we can generate embedding for unknown numeric tokens. The most important feature of Digit Embedding is that it uses digits and their positions to generate embedding rather than the number itself, and as we know, digits are from 0 to 9. - **(W2, W3, Q1, Q2, Q3) Ablation Study, the Benefits of representing composite data types and visualization of Digit Embedding values:**\ We have conducted an ablation study to have more insights on the contribution of each one of the enhancements that *PerfoGraph* brings; for example, in terms of composite data types, removing them increases the error rate of the AMD dataset by %13 for the device mapping dataset. Please refer to the global response and supplementary material file for more details. We have also included the visualizations of Digit Embedding values in the supplementary material file. - **(W4, Q4) The computation cost of PerfoGraph:**\ Thank you for pointing this out. As mentioned in the global response, our observations indicate that *PerfoGraph* does not impose significant overhead when training DL models. In fact, we observed a slightly lower overhead with *PerfoGraph*. Please refer to the global response for more details regarding the computation time experiments. --- Rebuttal 2: Comment: Thank the authors for the response. I found the supplementary material answered most of my questions. The ablation study indicates that digital embedding has a lesser impact on accuracy (1%) compared to the composite data type (>5%). The title is a bit misleading in this case as it emphasizes more on numerical awareness. I'm still not fully convinced that digital embedding is effective in capturing the magnitude of numerical values. One observation is that it seems it will cluster numbers with the same digits together (e.g. the point of number 50 is very close to the numbers in [50000-50090] in Fig.2 of the supplementary material. I tend to keep my score unchanged. --- Rebuttal Comment 2.1: Comment: Thank you for sharing your feedback. We acknowledge that a slightly reduced impact of Digit Embedding is observed on the Device Mapping dataset, However as this data set is relatively small, we conducted a further experiment on the DiscoPoP subset of the Parallelism Discovery task by eliminating Digit Embedding. We found our accuracy dropped to **95.26% (3.74% decrease)** with Digit Embedding removed from that experiment. From this observation, we believe it is fair to say that the role of Digit Embedding can be more prominent on more extensive datasets.
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their constructive feedback and comments. Hereby, we address some of the common concerns. - **Moving the Ablation Study to the main paper:**\ Due to the lack of space in the paper, we currently have the ablation study results in the Supplementary file. We will make sure to move and include the ablation study in the main paper. Thank you. - **Ablation Study Results:**\ We have conducted experiments on the contribution of each design choice and presented the results as Ablation Study in the Supplementary Materials file. For example, for the device mapping problem, removing composite data type nodes increases the error rate to %13 for the AMD dataset and 15% for the NVIDIA dataset. The supplementary material file contains more information about the Digit Embedding, Architecture of the models, Ablations study, and dataset. - **Computation cost of PerfoGraph:**\ Thank you for pointing this out. We measured the computation cost of *PerfoGraph* against *ProGraML*. For this experiment, we took the *OMP_Serial* dataset of the Parallelism Discovery task, which contains 30k IR files for training. The hardware configuration is the same as reported in section 5.1. We train our GNN model for 100 epochs with both *ProGraML* and *PerfoGraph* five times and report the average training time as follows:\ *PerfoGraph: 14 minutes and 59 seconds.*\ *ProGraML: 15 minutes and 56 seconds.*\ Then we measured the testing time with both *ProGraML* and *PerfoGraph* representation for the *Pluto* subset of *OMP_Serial* dataset as it is the largest subset containing 4032 test IR files. For measuring testing time, we again measured the testing time five times and reported the average testing time.\ The average testing time results for the *Pluto* subset are as follows:\ *PerfoGraph: 881 milliseconds*\ *ProGraML: 1231 milliseconds*\ And as already shown in the paper, for the *Pluto* subset, *PerfoGraph* has higher accuracy **(91%)** than *ProGraML* **(89%)**. Moreover, we measured the average diameter of *PerfoGraph* and *ProGraML* for the 30k IR files in the Parallelism Discovery training dataset:\ *Average diameter of PerfoGraph: 28.3*\ *Average diameter of ProGraML: 31.54*\ Based on these observations, we believe it is fair to say that the computational overhead of *PerfoGraph* is slightly less than *ProGraML*, while *PerfoGraph* has shown better performance, as discussed in the paper. Pdf: /pdf/b093100da2cb71e0a35cfc15f8c762d92a0e8f16.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Learning Interpretable Low-dimensional Representation via Physical Symmetry
Accept (poster)
Summary: This paper presents an approach to interpretable representation learning based on ''physical symmetry''. The core idea is to learn to predict the temporal evolution of a latent variable which is additionally encouraged to be equivariant under some transformation (e.g. translation or rotation). An autoencoder-like model is designed, where an input sequence is encoded to a sequence of latent variables. From these, three objectives are defined: direct reconstruction, reconstruction after predicting the next state, and an objective involving the symmetric transform. Experiments are presented on an audio-related task (predicting pitches from a sequence of single-note input spectrograms) and a vision task (predicting the coordinates of a moving ball). Further experiments are performed to demonstrate a reduced need for training examples, interpretability of the learned representations, and style-content disentanglement. Strengths: The main idea at the core of this work is interesting and original. This idea might also stimulate future work using such physical symmetry constraints. The paper provides a variety of experiments, which seem sensibly designed. It is welcome to see experiments dedicated to two different domains (vision and audio). I especially liked the experiments on style-content disentanglement in the Appendix (SPS+). These contain much more realistic scenarios than the main text (albeit still on synthetic datasets). The writing in the later sections of the paper is mostly clear and the figures throughout the manuscript are well done. More on the writing below. The proposed approach shows only slight or moderate improvements compared to using no physical symmetry constraints. Nevertheless, the idea itself could be considered significant enough to be shared with the community. Weaknesses: In my view, the paper suffers from three main weaknesses: - The writing in the initial sections of the paper (even the title!) is unnecessarily convoluted and confusing. Initially, I could not understand the main idea of the approach until reading the results sections and the appendix. - The experiments are performed on very constrained, synthetic datasets. The train-test split of the audio experiment, especially, seems very simple (the test set contains the same sequences as the training set, just very slightly detuned). For example, based on these experiments, I do not believe the proposed model would work for pitch estimation in-the-wild (whereas SPICE does, to some extent). - The results for the proposed approach show a slight or moderate improvement compared to the K=0 baseline. However, most of the observed effect seems to come from the sequence prediction and reconstruction objectives, and not from the proposed physical symmetry (see, e.g., Figure 3, K=0 vs K=4). This is somewhat disappointing. I also believe this fact is not openly discussed in the text. Some more comments regarding writing: - From the title, it is not clear that the approach is applied to the music or image domain. Neither title nor abstract clarify that the proposed approach relies on time-series data. - The abstract, intro and conclusion suggest that using (a) using physical symmetry for learning and (b) representation augmentation are two different contributions of the paper, when in fact, they refer to the same thing (i.e., the proposed idea). - The term ''prior model'' is used in the abstract, but is not properly explained. In the paragraph starting on line 37, this finally gets explained, but the exposition is again quite convoluted. This would be much easier to understand if the applications used for the experiments (pitch sequences/moving ball) were introduced right at the start. - The same goes for ''global invariant style code'', the meaning of which is completely unclear until one reads the appendix. Again, the discussion should be much more concrete and clearly mention what style and content mean in the different application scenarios. - Section 2 (Intuition) is actively unhelpful. Instead of a clean explanation (using example applications!), we are given pompous and unscientific prose such as ''we regard symmetry in physical law as a general inductive bias of the human mind'' (line 74). - Line 87 introduces the term ''linear pitch factor'', without clarifying the difference between that and simply the term ''pitch''. I suppose the term pitch is not used since the learned latent variables do not encode pitch on a natural (e.g. MIDI) scale, but this only becomes clear to the reader much later in the manuscript. - In lines 110f, the reader suddenly learns that the model is a variational autoencoder! This should have been made plain in Section 3.1, or even earlier. Overall, this is a borderline paper for me, with a tendency towards reject. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - How important is L_rec, relative to L_prior? That is, can we turn off L_rec and still get similar results? - In Equation 2 (and other equations), how do you use the binary cross entropy loss to compare spectrograms (or pictures)? This is very unusual, since spectrograms do not have a value range [0,1]. Is this applied in an element-wise fashion? - Line 122: ''In other words, we add or subtract the latent codes by a random scalar.'' -- Unclear. Do you mean you add a scalar *to* the latent code? - Line 137f: Is the choice of sequence (a scale upward and downward) relevant? Could you also choose a stationary sequence? Or a random sequence (transposed to different keys)? - Lines 286ff: I think the limited usefulness of the VAE compared to the AE stems from the fact that the datasets used are so simple. There are simply no sources of variability that would need to be modeled by the Gaussian prior. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The limitations of the synthetic experimental setups should be discussed more openly in the text. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the detailed feedback and for raising those important questions. In the below chain of comments, we respond in a breakdown format. > **How important is L_rec, relative to L_prior? That is, can we turn off L_rec and still get similar results?** **Reply 1**: No, we can’t. L_rec is critical in preventing representation collapse. Your intuition to ablate L_rec is astute in a sense that L_prior is the conceptually central training objective — it literally evaluates the *latent dynamics*. However, without L_rec, the latent representation will collapse, where all possible x is encoded to the same z. In the case of collapse, the prior model will trivially learn the identity function to achieve L_prior=0. Additionally, collapse is guaranteed almost everywhere in the random initialization space since L_prior is a contra**c**tive loss that pulls representations inwards. L_rec forces z to contain at least the information required to reconstruct x, preventing collapse. > **In Equation 2 (and other equations), how do you use the binary cross entropy loss to compare spectrograms (or pictures)? This is very unusual, since spectrograms do not have a value range [0,1]. Is this applied in an element-wise fashion?** **Reply 2**: We apply it in a pixel-wise fashion. A binary cross entropy (BCE) loss can be used because we normalise the energy value of spectrograms to the range [0, 1] in advance (paper line 144). In fact, we have done experiments comparing using BCE vs. using L_2 loss, and yielded no difference in performance. > **Line 122: ''In other words, we add or subtract the latent codes by a random scalar.'' -- Unclear. Do you mean you add a scalar *to* the latent code?** **Reply 3**: Yes, we mean “we add a random scalar to the latent code”. We will update the paper accordingly. > **Line 137f: Is the choice of sequence (a scale upward and downward) relevant? Could you also choose a stationary sequence? Or a random sequence (transposed to different keys)?** **Reply 4**: We should be able to use random sequences transposed to different keys to train. In contrast, stationary sequences (if you mean all notes in a sequence share the same pitch) will not work Since the RNN would collapse. > **The results for the proposed approach show a slight or moderate improvement compared to the K=0 baseline. …(see, e.g., Figure 3, K=0 vs K=4).** **Reply 5**: It may be a bit misleading if we only focus on Fig 3 in order to understand the effectiveness of SPS. If we take a comprehensive view of the experimental results, i.e. considering Fig 3 alongside Fig 11, it becomes evident that the introduction of timbre factors makes the improvement (of the linearity of pitch) from physical symmetry much more pronounced. As the task difficulty increases, the role of symmetry becomes increasingly significant. > **From the title, it is not clear that the approach is applied to the music or image domain. Neither title nor abstract clarify that the proposed approach relies on time-series data.** **Reply 6**: We mention physical symmetry in the title and video and audio in the abstract, which implies what we deal with is time-series data. For the sake of clarity for readers, we will indicate time-series in the revised abstract. > **The abstract, intro and conclusion suggest that using (a) using physical symmetry for learning and (b) representation augmentation are two different contributions of the paper, when in fact, they refer to the same thing (i.e., the proposed idea).** **Reply 7**: We do not think they are the same things though they are strongly related in this work. Using physical symmetry is a novel methodology to unsupervisedly learn interpretable representation from time-series data, while representation augmentation is just one way to implement it. Of course there might be other pure mathematical ways (as opposed to using imaginary samples) to enforce physical symmetry constraints but we have not found them yet. In addition, representation augmentation can also help methods other than physical symmetry that regularise the latent space. Therefore, we regard them as two different contributions. > **I think the limited usefulness of the VAE compared to the AE stems from the fact that the datasets used are so simple. There are simply no sources of variability that would need to be modeled by the Gaussian prior.** **Reply 8**: In general, VAE yields better representations than AE even when the data is simple and the Gaussian prior works very well even on non-Gaussian data. Our work indicates that physical symmetry could be an inductive bias which could 1) substitute the gaussian prior for some time-series tasks and 2) at the same time be more interpretable. > **Section 2 (Intuition) is actively unhelpful. Instead of a clean explanation (using example applications!).** **Reply 9**: The main contribution of the paper is to draw inspiration from modern physics to address a key challenge in machine learning: how to learn interpretable low-dimensional factors from unlabeled data without relying on domain-specific knowledge. Therefore, the writing in the beginning focuses a lot on high-level abstraction and our vision & intuition, which aims to help readers to analogously understand the big picture and why physical symmetry is a general inductive bias. Above all, an outstanding paper is not merely a tech report, but also a story with high-level and inspiring ideas. We hope the significance of this text could be reconsidered. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed responses to all reviews and for answering some of my questions! Overall, I think this paper has a very interesting idea at its core and I would not oppose accepting it. Some additional comments: - Reply 1: Thanks for clarifying it. I suppose for the simple synthetic datasets used, most of the heavy lifting seems to be done by the reconstruction loss. It will be interesting to see in future work how much physical symmetry can contribute for more complicated scenarios and real-world datasets. - Reply 7: OK. I would suggest further emphasizing the distinction between physical symmetry as "concept" and representation augmentation as "implementation". - Reply 9: I suppose we fundamentally disagree on this. I believe claims in scientific texts should not go beyond what is shown in the results provided or supported by literature. The results in this paper certainly do not support any speculation about "the human mind". I hope that, even if we disagree on the subject of Reply 9, my comments on the writing might help to make the introduction a little bit clearer. In particular, consider emphasizing the application examples (pitch sequences/moving ball) and the autoencoder structure (i.e. reconstruction!) of the model. --- Reply to Comment 1.1.1: Comment: We would like to thank you again for the appreciation of our idea and the willingness to lean toward accepting it. Also, both the original critiques and additional comments helped us better understand the writing from a reader's perspective and we will certainly follow some important suggestions to make it clearer. In particular, trying to make it clear which part is pure conceptual comprehension and which part is implementation. In the revision, we will try to use a more rigorous expression to assist readers in comprehending physical symmetry through analogical reasoning.
Summary: The paper presents a new way of training a self-supervised system that includes a data augmentation module in the latent space that leverages physical symmetry. This introduction of a transformation covariance in the latent space while training helps learn interpretable, robust and data-efficient representation. An evaluation is performed on synthetic data in two domains: music and videos. The evaluation shows that the new training system is able to learn a presentation that fits human perception. Strengths: 1) The paper is very well written and very interesting 2) The idea of leveraging symmetry by enforcing latent space covariance seems quite novel to me and could have an impact in the field of Self-Supervised learning. 3) The method seems to perform as claimed on the presented evaluation (on toy datasets). Weaknesses: 1) The evaluation is done only on toy problems with synthetic datasets. 2) Part of the evaluation is not very convincing for several reasons: a) First experiment: - The task is too easy for the reported metrics: the R2 scores are saturated to 1, which makes the comparison with the baseline (SPICE) quite questionable. - The ablation study (K=0) is a bit disappointing, as the improvement of using symmetry (K=4) doesn't seem significant. So the good properties of the learned representation seem more linked to the encoder/decoder architecture than to the use of symmetry covariance. b) sample efficiency: the correlation between K and sample efficiency is quite unclear in figure 6: for 512 and 2048 samples, K=4 and K=16 seem to have very similar performances. 3) It's not trivial how the dimension of the latent space and the symmetry should be chosen for problems a bit more complex than the toy ones presented in the evaluation. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: I really don't get why physical symmetry could be seen as a counterfactual inductive bias. Any latent representation model could be used to answer "what if" questions about the model (or rather the latent representation itself), which doesn't make them counterfactual inductive. Could the author be more specific about what they mean here? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: 1) The link between the group of symmetry to be used for training and the actual task may not be straightforward for tasks a bit more elaborated than the ones proposed in the paper. 2) The claimed interpretability and the link with causality are questionable: a) Sample efficiency and extendability are very barely linked to interpretability b) The "interpretability" of the latent representation in the experimental part is largely due to design choices on the latent space and the symmetry, which can be considered as domain-knowledge priors. So I don't think the model helps generate interpretable representations but is only a way of encoding domain-knowledge priors. For instance, when the authors claim, "our model learns a more interpretable pitch than...", the model is actually not learning a more interpretable representation, but rather the prior was correctly handled by the symmetry. Same when they claim "... leads to a better enforcement of interpretability". It would be probably safer to talk about "better enforcement of the prior". The example with the video has similar issues: implicitly, the chosen symmetry considers that the complicated dimension (height), which as a non-linear behavior, should be kept free, while the two others can be easily contained. This really sounds like a domain knowledge prior. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing the feedback and raising important questions and concerns. We answer these questions here and also like to take the opportunity to address some concerns raised in weaknesses/limitations. > **Why could physical symmetry be seen as a counterfactual inductive bias?** **Reply 1**: We consider the physical symmetry as a counterfactual inductive bias on latent dynamics. In other words, it addresses the question if z_[1:t] is transformed, what z_[t+1:k] would be. Other approaches, like Variational Autoencoders (VAEs), do not consider the temporal aspect of the latent z. Therefore, we do not consider them to be counterfactual inductive biases. Thank you for bringing up this issue, which made us aware of the lack of clarity in our description. We will make sure to clarify this in the revised version. > **The choices on the latent space and the symmetry group assumptions sound like domain knowledge?** **Reply 2** We distinguish the difference between symmetry assumptions and domain knowledge through the following three aspects. **1) Representation augmentation requires much less knowledge than data augmentation.** Consider pitch extraction. Almost all related works use pitch-shift augmentation, but on the data and not on the representation. It requires the knowledge of how to transform the input data to shift the pitch. Simple techniques include time stretching and spectrogram frequency stretching, and advanced techniques (vocoders etc.) involve shifting harmonics without changing the timbre. In comparison, SPS just translates the representation vector. As for the vision task, it is not clear at all how to apply 3D spatial translation-rotation augmentation to unlabelled video. With our SPS method, one only needs to translate and rotate a representation vector of length 3. **2) Domain knowledge and representation symmetry belongs to two different conceptual levels.** When we pick group assumptions for a given problem, the kind of knowledge required is more abstract (and general) than what we usually call “domain knowledge”. It is us human’s beliefs about concepts themselves, not how concepts are encoded from data. Concretely, recall that music and vision are two vastly different fields. Their respective domain knowledge systems look nothing like each other. In comparison, SPS solves one task with 1D translation and the other with 2D translation-rotation. Their forms look overarchingly similar! To generalise, drastically different systems can share formally similar physical symmetries. The “expertise” involved in picking symmetries is utterly different from what “domain expertise” refers to. **3) We may not even need a correct symmetry assumption for successful representation learning with SPS.** The intuition section of our paper mentions that many modern physicists *start* with symmetry (in law) assumptions *and then* obtain testable theories of physical laws. Notice: first there are guesses about symmetry, and after that, “domain knowledge” is created. What it means is that symmetry is so fundamentally general that proposing symmetry assumptions is an efficient way of regularising concepts, even before we settle down with any domain knowledge. One is free to try multiple symmetry assumptions and see which ones learn better representations. Our experiments (section 5.3) already show that even with *incorrect symmetry assumptions* SPS can still learn to extract 3D Cartesian coordinates from the unlabelled bouncing ball dataset. > **Part of the evaluation is not very convincing,R2 scores are saturated to 1, which makes the comparison with the baseline (SPICE) quite questionable. The ablation study (K=0) is a bit disappointing, as the improvement of using symmetry (K=4) doesn't seem significant. So the good properties of the learned representation seem more linked to the encoder/decoder architecture than to the use of symmetry covariance.** **Reply 3**: The $R^2$ score may be a bit misleading if we only focus on Table 1. If we take a comprehensive view of the experimental results, i.e. considering Table 1 alongside Figure 12, it becomes evident that the introduction of timbre factors makes the $R^2$ improvement from physical symmetry even more pronounced. As the task difficulty increases, the role of symmetry becomes increasingly significant. > **sample efficiency: the correlation between K and sample efficiency is quite unclear in figure 6: for 512 and 2048 samples, K=4 and K=16 seem to have very similar performances.** **Reply 4**: We are not trying to show a linear relation between K and interpretability (linear project loss). The point is that using representation augmentation is better than not using it, regardless of the dataset size. Additionally, it can be observed that increasing the amount of data makes it easier for the model to learn interpretable representations, so the interpretability provided by representation augmentation saturates when the data size reaches 2048. Furthermore, we are unable to determine the optimal value of K for different dataset sizes. We will consider revising the original text to clearly highlight this point, thank you for bringing it up. --- Rebuttal Comment 1.1: Title: Answer to rebuttal Comment: Dear authors, Thank you for carefully reading my review and answering my comments. Reply 3 and 4 clarified my understanding of your experimental claims. I'm still puzzled about the physical symmetry being responsible for counterfactual inductive bias, while any method that would predict future samples of a time series from past ones could be considered counterfactual (while the underlying causal mechanism is usually quite questionable). Why the use of physical symmetry would make the system "more causal"? I'm still unconvinced about the distinction between domain knowledge and physical symmetry design. I still think that to choose your symmetry, you still need implicit domain knowledge. For both experiments, you do use some sort of domain knowledge about the underlying data to pick the symmetries: for instance, for the videos, the two horizontal coordinate dimensions have a different role than the height, domain knowledge that you somewhat use to pick a different symmetry on the two first latent dimensions and on the third one. And this seems necessary to keep a bit of interpretability. Though it may only be a question of your definition of "domain knowledge," which seems to be a bit more restrictive than mine. Overall, I still think the paper is an interesting contribution that is worth to be published. --- Reply to Comment 1.1.1: Comment: Thank you for your comments. We hope the following explanations would make what puzzled you clearer. According to Pearl and Mackenzie [1], the “Ladder” of Causality has three levels: Seeing (statistics), Doing (intervention), and Imagining (counterfactuals). A higher level means “more causal”. If the model only learns statistical correlation between the past and the future, it is at level 1 – Seeing, which is pure statistics. Many time-series models (e.g., vanilla HMM and RNN) are at this level. Intervention is about doing — observing what happens when changes are made. Counterfactuals can be thought of as "imaginings". It allows us to consider specific scenarios that are inexpressible at level 1 or 2. E.g., for the video experiment, training with physical symmetry means the model can imagine how changes of the past would affect changes in the future (I.e. $(R(S(z_{1:T}))) = S(z_{T+1})$) for each trajectory. By training with representation augmentation, our learned model can predict and reconstruct unseen trajectories, through the translation or rotation transformation on its latent $z$. Therefore, physical symmetry transcends our time-series model from level 1 to level 3, and we consider such inductive bias counterfactual. Regarding domain knowledge, we agree that there may be no strict lines to distinguish between domain vs. non-domain knowledge. Our point is that physical symmetry yields much looser assumptions on the underlying data. Let’s take the video case as an example, the only strict assumption here is that the underlying representation of each data sample is low-dim — 3D. Here, we need ​​a comprehensive view of the experimental results. From Figure 8, we see that even with incorrect symmetry assumptions, the model can still learn interpretable 3D cartesian space. Hence, the “correct” symmetry assumption (which seems to incorporate the domain knowledge of a real 3D cartesian world) is actually not necessary. Additionally, the 3D assumption actually doesn’t even imply the domain knowledge of a 3D *cartesian* coordinate. In our early experiments (which we didn’t put in the paper), we accidentally made each trajectory start from the same location. As a result, the model learned a cylindrical coordinate system (in which the first 2d is a polar coordinate). Therefore, physical symmetry is really an inductive bias tailored for the latent space, which puts minimum assumption on the data (of a specific domain). If there is anything implicit, it’s fair to say that physical symmetry implies both visual domain and auditory domain in the same fashion, so it is a more generalized inductive bias. [1] Pearl, J., & Mackenzie, D. (2020). The book of why: the new science of cause and effect. Basic books.
Summary: This paper utilizes the concept of physical symmetry as a self-supervised constraint within an auto-encoder framework to enhance the learning of interpretable and disentangled representations. The authors validate their approach through experiments conducted on unlabelled monophonic music audio and monocular videos featuring a bouncing ball. The results demonstrate high projection accuracy and successful disentanglement of style and content. Additionally, the paper introduces representation augmentation techniques that improve sample efficiency and enhance interpretability. Strengths: 1. This paper is well-written and easy to understand, demonstrating a cohesive and structured presentation of the research. 2. Convincing experiment results, which align with the stated objectives, provide strong support for the findings of this study. 3. The representation augmentation technique is novel and supported by empirical evidence showcasing its effectiveness. 4. The results on color-texture disentanglement is clear and interpretable Weaknesses: 1. The experimental setup seems to be somewhat simplistic despite the experimental results being good 2. Only few experiments on real-world dataset are provided, which is not able to fully demonstrate the generalizability and applicability of your method. 3. There is a lack of theoretical evidence of your proposed method and further explanation of the relationship between physical symmetry and representation learning. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Could you explain what a random-pooling layer does and the details of how you split the style and content vector? Have you ever tried other methods to disentangle the style and content part? 2. Will the sequence length (the value of T) affect the final result? 3. Could you describe any unique aspects or challenges of the dataset you have proposed? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: 1. The generalizability and applicability for real-world dataset may be limited. 2. The proposed dataset should be more challenging for further utilization on research Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive review! We really appreciate the concise summarisation on both the strengths and the current limitation of the paper. We hereby respond in a breakdown format. > **Could you explain what a random-pooling layer does and the details of how you split the style and content vector? Have you ever tried other methods to disentangle the style and content part?** **Reply 1**: Random pooling simply means we randomly pick a vector $z_{i, s}$ as $z_{\tau, s}$ from the style factor sequence $\textbf{z}_{1:T, s}$. This operation encourages the learned style representation to be constant over time. Also, To split the style and content vector we first cut the vector $z$ into two parts. If $z$ has $n$ dimensions, we choose $z[0:m]$ as $z_c$ and $z[m:n]$ as $z_s$, where $n$ and $m$ are hyperparameters (For the audio problem $m=1$ and $n=2$. For the video problem $m=3$ and $n=2$). As for other methods of disentanglement, we haven’t tried any since a random pooling (our first try) simply works within an SPS+ framework. We certainly note that the content-style definition imposed by SPS+ is very simple — content refers to representations that change over time and adhere to physical symmetries, while style refers to representations within the same sequence that remain unchanged over time (global invariance). In general, there can be meaningful factors that change over time but whose patterns of change do not adhere to physical symmetries. In such cases, we propose to combine SPS with other methods for decoupling in the future. For instance, applying SPS to Causal Variational Autoencoders (CVAEs), disentangled GANs, or contrastive/non-contrastive methods. Actually, an ongoing experiment of our team is to combine VICReg [1] and SPS to obtain more interpretable content factors, and preliminary results suggest that such combination outperforms either SPS or VICReg alone on some non-trivial cases. [1] Bardes, A., Ponce, J., & LeCun, Y. (2021). Vicreg: Variance-invariance-covariance regularization for self-supervised learning. arXiv preprint arXiv:2105.04906. > **Will the sequence length (the value of T) affect the final result?** **Reply 2**: The sequence length T has no direct effect on learning interpretable factors, but has an indirect effect. A sufficiently long sequence can ensure that the prior model (RNN) learns meaningful system dynamics, and this is the premise for physical symmetry constraints to work. If the prior model can already learn the correct system dynamics with a small T, increasing the sequence length will no longer help learn interpretable representations. > **Could you describe any unique aspects or challenges of the dataset you have proposed?** **Reply 3**: Due to the small size of our dataset, it is challenging for existing unsupervised learning methods to learn meaningful interpretable representations. The audio dataset (as described in A.4.1) contains *only* 2400 audio clips played by multiple instruments. In the computer music domain, we know how hard it is to unsupervisedly 1) disentangle pitch and timber, and 2) learn a linear pitch concept. As far as we know, all prior works incorporate strong domain knowledge, such as pitch shifting for pseudo-label generation [1, 2] or using instrument labels [3]. Likewise, learning concepts such as 3D coordinates from unlabelled video has long been a far-fetched fantasy for CV researchers. To be fair, in recent years we have seen exciting progress on this front. For example, LEAP [4] performs physically meaningful representation disentanglement via causal discovery. But even LEAP *requires* not only independent noises but also a sufficient causal structure (e.g., five or more balls interacting in the same scene via springs forces) in order to learn disentangled location factors. [1] Luo, Y. J., Cheuk, K. W., Nakano, T., Goto, M., & Herremans, D. (2020, October). Unsupervised Disentanglement of Pitch and Timbre for Isolated Musical Instrument Sounds. In ISMIR (pp. 700-707). [2] Gfeller, B., Frank, C., Roblek, D., Sharifi, M., Tagliasacchi, M., & Velimirović, M. (2020). SPICE: Self-supervised pitch estimation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28, 1118-1128. [3] Luo, Y. J., Agres, K., & Herremans, D. (2019). Learning disentangled representations of timbre and pitch for musical instrument sounds using gaussian mixture variational autoencoders. arXiv preprint arXiv:1906.08152. [4] Yao, W., Sun, Y., Ho, A., Sun, C., & Zhang, K. (2021). Learning Temporally Causal Latent Processes from General Temporal Data. arXiv preprint arXiv:2110.05428. --- Rebuttal Comment 1.1: Comment: I greatly appreciate your comprehensive review and the thoughtful responses provided to my questions. Your explanations are both clear and well-justified. The proposed method is straightforward and supposed to be great, and I'm still looking forward to the potential for even more interesting frameworks for manipulating the style and content vectors. In summary, your paper introduces a novel and effective method, presented within a well-organized structure and expressed with well-crafted sentences. I believe it is a good paper deserving of acceptance. --- Reply to Comment 1.1.1: Comment: We are glad our responses have contributed to your evaluation and confidence in our work. Thank you once again for your support.
Summary: This paper proposes a symmetry equivariance constraints on the transition dynamics of latent representations for time-indexed data. The claim is that imposing these certain symmetries create interpretable model representations that correspond to popular domain-specific representations in the audio and video domains. Experiments on audio demonstrate how enforcing a translational symmetry recovers an interpretable one-dimensional latent factor corresponding to pitch. Experiments on video show how enforcing translational and rotational symmetries recovers interpretable three-dimensional latent factors corresponding to spatial coordinates. Ablations also show some robustness of learning to mis-specification of symmetries. Strengths: This is an inspiring paper! The paper is well-motivated and well-written. The methodology is clearly described and easy to follow (although I do think there might be an even better probabilistic formulation/presentation of the training objective; see the Questions section). The methods make sense and I feel confident that I could implement these ideas myself, based on the description in the paper. The experiments are well-executed and support the hypothesis that physical symmetries can provide a powerful inductive bias, at a higher level of abstraction than more domain-specific approaches. Weaknesses: My only major criticism is that the experiments are in somewhat "toy" settings. It would be interesting to see an application of these ideas to a more significant problem, with stronger baselines. A very minor criticism of the (otherwise excellent!) introduction: "Such an approach is very different from human learning; even without formal music training, one can at least perceive pitch, a fundamental music concept, from the experience of listening to music." I think this claim is too strong. It is not clear how much of human perception is learned from experience, vs. baked in to our genetics and brain structure. This claim isn't important to your central argument, so I suggest moderating it a bit. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Is it possible to frame the training objective (Equation 1) as a proper probabilistic loss? See [1] (the static case) and [2] (the time-indexed case) for the probabilistic formulations of the Gaussian prior. If this is possible, it might help to clarify why/whether it is possible to drop the KL regularization term. Does this work relate to previous work on imposing symmetries on intermediate layers of neural networks? E.g. [3] and [4]. I am not deeply familiar with that line of work, but I'm bringing it up because it might be relevant (and at the very least, you might find it interesting if you aren't already aware). [1] Auto-Encoding Variational Bayes. Diederik P. Kingma, Max Welling. [2] A Recurrent Latent Variable Model for Sequential Data. Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron Courville, Yoshua Bengio. [3] Deep Symmetry Networks. Robert Gens, Pedro Domingos. [4] Group Equivariant Convolutional Networks. Taco S. Cohen, Max Welling. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The experiments presented in this work use low-dimensional latent spaces: 1 dimension for audio and 3 dimensions for video. It is not clear how to adapt these methods to the high-dimensional latent spaces commonly used (and often required) for expressive models where model performance and capacity is prioritized more highly than interpretability. Furthermore, it is not clear what symmetries we ought to impose upon high-dimensional latent spaces. That said, these questions are clearly beyond the scope of the present work: I see this limitation more as an interesting avenues for future investigation rather than a weakness of the present work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for giving the insightful feedback and raising such important questions. Below we respond in a breakdown format. > **Is it possible to frame the training objective (Equation 1) as a proper probabilistic loss? See [1] (the static case) and [2] (the time-indexed case) for the probabilistic formulations of the Gaussian prior. If this is possible, it might help to clarify why/whether it is possible to drop the KL regularization term. [1] Auto-Encoding Variational Bayes. Diederik P. Kingma, Max Welling. [2] A Recurrent Latent Variable Model for Sequential Data. Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron Courville, Yoshua Bengio.** **Reply 1**: Thank you for referencing [1] [2] to help us in obtaining a sound probabilistic formulation for SPS. We have been working on the theoretical grounding for SPS, but to be very frank there are no concrete results yet. We’d like to share our current direction: First, assume the dataset follows a certain symmetry. Then, show that including the representation augmentation terms in the training objective is a way of imposing a prior and maximising the likelihood of the dataset. Specifically, given the symmetry assumptions, rewrite the likelihood of the dataset to include a lower bound that is maximised by representation augmentation training (following the assumed symmetry). We will greatly appreciate your feedback in this regard! We look forward to the discussion phase and consider it as an opportunity to collaboratively discover a probabilistic formulation, in which case we will eventually and duly acknowledge your input. > **Does this work relate to previous work on imposing symmetries on intermediate layers of neural networks? E.g. [3] and [4]. I am not deeply familiar with that line of work, but I'm bringing it up because it might be relevant (and at the very least, you might find it interesting if you aren't already aware). [3] Deep Symmetry Networks. Robert Gens, Pedro Domingos. [4] Group Equivariant Convolutional Networks. Taco S. Cohen, Max Welling.** **Reply 2**: Thank you for referencing [3] [4] as they are very exciting to read and indeed very related to learning with symmetry. [3] [4] are in the same line with [5] [6] (referenced in the paper); they emphasise some equivariant relations **between signal x and representation z**: When a certain transformation is applied to x, z should keep invariant or follow a similar/same transformation. Specifically, [3] [4] develop network architectures invariant to continuous group transformations of input data and intermediate feature maps. On the contrary, SPS **deals with z only**, and its time-series equivariance. Our paper’s major contribution is to “escape” the signal x space and only prescribe constraints in the representation z space. There’s actually another reason we appreciate your references so much: [3]’s review on Lie groups and object orbits (section 2) gave us new insights to our method. [3] is concerned with classification tasks, characterised by their VC dimension and sample complexity, which they show can be reduced by using symmetry if object orbits are homogeneously labelled. That is analogous to our method if we consider energy-based models (EBMs) of time series. An EBM outputs an energy (scaler) given one trajectory z[1:T] which, intuitively, denotes how “physically realistic” the EBM assesses the trajectory to be. This way, the energy forms object orbits under specific symmetry assumptions, and we require each object orbit to have homogenous energy. This analogy may eventually enable new theoretical groundings for our formulation of SPS. As such we are extra thankful that you referenced [3] [4]. [5] Dupont, E., Martin, M. B., Colburn, A., Sankar, A., Susskind, J., & Shan, Q. (2020, November). Equivariant neural rendering. In International Conference on Machine Learning (pp. 2761-2770). PMLR. [6] Sanghi, A. (2020). Info3d: Representation learning on 3d objects using mutual information maximization and contrastive learning. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIX 16 (pp. 626-642). Springer International Publishing. > **A very minor criticism of the (otherwise excellent!) introduction: "Such an approach is very different from human learning; even without formal music training, one can at least perceive pitch, a fundamental music concept, from the experience of listening to music."I think this claim is too strong. It is not clear how much of human perception is learned from experience, vs. baked in to our genetics and brain structure. This claim isn't important to your central argument, so I suggest moderating it a bit.** **Reply 3**: Thank you for your kind advice. We recognize that such a strong claim calls for supportive evidence, but the psychological experiments that would prove/disprove our claim are not there yet to our knowledge. We will consider removing/soften the claim for the sake of soundness — thank you for bringing it up. --- Rebuttal Comment 1.1: Comment: > We’d like to share our current direction: First, assume the dataset follows a certain symmetry. Then, show that including the representation augmentation terms in the training objective is a way of imposing a prior and maximising the likelihood of the dataset. Specifically, given the symmetry assumptions, rewrite the likelihood of the dataset to include a lower bound that is maximised by representation augmentation training (following the assumed symmetry). Yes, this is approximately what I had in mind. My point was that the KL divergence term of the VAE loss arises naturally by from defining a proper probability distribution over the join distribution p(x,z) on observations x and latent variables z, and constructing a lower bound to tractably optimize the marginal distribution p(x). In the simplest setting, the prior p(z) is chosen to be Gaussian. It seems possible to me (I admittedly have not fully thought through the idea) that, by encoding a symmetry structure into your prior over the latent sequence, the lower-bound derived by the usual VAE argument would correspond to your Equation 1. To be very clear: the above discussion is beyond the scope of the reviewing process. **I think this is a good paper and that it should be accepted** in its current form (+ any revisions to address other reviewers' concerns). Formalizing SPS in a probabilistic framework would be an interesting subject for future work. --- Reply to Comment 1.1.1: Comment: > It seems possible to me (I admittedly have not fully thought through the idea) that, by encoding a symmetry structure into your prior over the latent sequence, the lower-bound derived by the usual VAE argument would correspond to your Equation 1. Thank you for your insight on formulating SPS in a probabilistic framework! We find it a good place to start formalizing exisitng intuitions. We agree that it is a subject for future work. Thank you for your detailed study of our work and the constructive, positive review!
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: This study introduces a new methodology to learn interpretable representations from data by incorporating physical symmetry as a self-consistency constraint in the latent space. It addresses a key challenge in machine learning: how to learn interpretable low-dimensional factors from unlabeled data without relying on domain-specific knowledge. The method, named Self-Supervised learning with Physical Symmetry (SPS), uses the concept of physical symmetry to ensure that the model's learned dynamics are invariant under certain transformations. The research applies SPS in two domains: music and computer vision. In music, SPS successfully learns a linear pitch factor from unlabeled monophonic audio, and in computer vision, it learns a 3D Cartesian space from unlabeled videos of a simple moving object. The study suggests that the use of physical symmetry could lead to a new technique called representation augmentation, which enhances the model's sample efficiency. The authors draw inspiration from the approach in modern physics where scientists often start from a symmetry assumption to derive laws and predict properties of fundamental particles. This idea of symmetry as a fundamental guiding principle is being used as an "inductive bias" for their representation learning model, helping to create an interpretable low-dimensional latent space. They view symmetry in physical law not only as a design principle of nature, but also as a cognitive bias of the human mind. This suggests that they believe the learning model should reflect the same intuitive understanding of symmetry. Physical symmetry is also used as the basis for a learning technique called "representation augmentation". This involves generating additional pairs of training samples from existing ones by applying certain group transformations, which are informed by the symmetries. This process helps to improve sample efficiency and imposes a regularization effect on the learning model. Finally, through the lens of causality, the authors describe physical symmetry as a counterfactual inductive bias. This means the learning model is designed to ask "what if" questions, specifically what would happen if predictions were based on transformed latent codes. This concept also provides constraints on the model's encoder and decoder components, since they are trained end-to-end, keeping the overall structure and function of the model consistent with the underlying symmetry principle. Key Objectives: To introduce and explore the use of physical symmetry as a self-consistency constraint in the latent space of time-series data. To develop the SPS methodology that applies physical symmetry to the prior model in an encoder-decoder framework. To demonstrate the application of SPS in learning a linear pitch factor from unlabeled monophonic music audio, without any domain-specific knowledge about pitch scales. To apply the SPS methodology in the computer vision domain, specifically learning a 3D Cartesian space from unlabeled videos of a simple moving object. To examine the desirable properties of SPS, including conciseness, sample efficiency, robustness, and extendability. To prove that even with an incorrect symmetry assumption, SPS can still learn more interpretable representations than baseline models. To demonstrate the possibility of combining SPS with other learning techniques, allowing for content-style disentanglement from temporal signals. Strengths: Inductive Bias - By leveraging physical symmetry as an inductive bias, the learning model aims to uncover meaningful and interpretable low-dimensional representations from high-dimensional data. This approach aids the model in abstracting critical features of the data, such as the pitch of music notes or the 3D location of a moving object, while filtering out less relevant information. Regularization and Model Robustness - The method uses symmetry-based representation augmentation and transformations, which generate additional training samples. This augmentation process improves sample efficiency and imposes regularization on the model, reducing overfitting. It also helps in maintaining consistency in predictions, thereby improving model robustness. The model is designed with self-consistency in mind. The prior model's predictions for a transformed version of the latent space and the original version should be close, and so should their decoded reconstructions. This self-consistency constraint enhances the regularization of the latent space. Training Objectives - The total loss function used in the model is a combination of several loss terms including reconstruction loss, prior prediction loss, symmetry-based loss, and KL divergence loss. This diversified loss function enables the model to optimize different aspects of the learning process, contributing to a better overall model performance. Flexibility and Generality - This approach is versatile and can be applied to different problems. Different group transformations are used for different problems, demonstrating the method's ability to adapt to different contexts. Variants - The model has different versions like SPSVAE and SPSAE, providing flexibility based on the specific needs of the task at hand. For instance, SPSAE could be used when a simpler model architecture without the KL divergence term is sufficient. Weaknesses: Dependence on Correct Symmetry Assumptions: While the experiments show that the method is robust to incorrect symmetry assumptions, choosing the right symmetry transformation S is crucial for optimal results. Incorrect assumptions can still lead to an inferior performance. Real-world data may contain complex symmetries not known a priori, which could lead to difficulty in correctly specifying these assumptions. Dependence on Augmentation Factor: The efficiency and effectiveness of the model are significantly affected by the augmentation factor K. Choosing an optimal K is a challenge and requires extensive experiments. A poorly chosen K could result in less efficient training or lower performance. High-Dimensional Latent Space: While the method has been demonstrated on 1D and 3D latent spaces, its effectiveness in higher-dimensional latent spaces is not fully explored. Real-world data can often have high-dimensional latent spaces, and it may be challenging to maintain the same level of performance in these cases. Overfitting: While representation augmentation can increase the robustness of the model, applying it excessively might lead to overfitting, especially when data is scarce. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: How do the authors propose this approach would be scalable for other tasks and what are the ways to prevent overfitting with the approach? SPS can still learn interpretable representations even with incorrect symmetry assumptions. What might be the theoretical basis for this? Could there be instances where this doesn't hold true? How does the proposed SPS model relate to, or differ from, other models or techniques used in representation learning? Could SPS be integrated with other methods to address its limitations? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Authors do inform about the range of limitations for the work Partial Information Capture: The authors note that when the underlying concept following physical symmetry only encompasses a part of the information present in the time series data and cannot fully reconstruct the inputs, SPS may not work effectively. This issue is potentially related to content-style disentanglement. Inability to Distill Concepts from Multibody Systems: The current model struggles with learning concepts from systems with multiple interacting components. As examples, the authors mention that SPS fails to learn the concept of pitch from polyphonic music, or understand 3D space from videos featuring multiple moving objects. Lack of Formal Theory for Quantification: The authors highlight the need for a formalized theory to quantify the impact of representation augmentation. Such a theory could help measure the degree of freedom in the latent space both with and without physical symmetry, and help explain why even incorrect symmetry assumptions can still lead to correct and interpretable latent space. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review. We sincerely appreciate your comprehensive understanding of this paper’s vision and strengths as well as its current limitations. Additionally, thank you for the in-depth questions. We’d like to address them here to the best of our knowledge. > **How do the authors propose this approach would be scalable for other tasks** **Reply 1**: If a dynamic system has a low-dim intrinsic latent space (similar to the 1D pitch or 3D location tasks), we expect SPS to generalise well. For tasks with *high-dim* latent factors, as the review has pointed out, applying SPS may be challenging. Similar to modern physics, trying multiple symmetry hypotheses becomes the first step. For that, we foresee an “auto-SPS” in the future which automatically *searches* for symmetry hypotheses given the search scope. Since figure 8 shows that a range of symmetry assumptions may all lead to interpretable representations, such a search has a higher probability to hit relevant symmetry assumptions. As for tasks where physical symmetry alone is not strong enough to regularise concepts, we will need to combine SPS with other methods. We already have a simple example in our appendix: SPS+, which takes advantage of the temporally invariant style factor to enable SPS’s effect on the content factors. Alternatively, if the style factors are intractable and we do not aim to model the style factors’ temporal dynamics, we can use SPS with VICReg [1] to obtain more interpretable content factors, and preliminary results suggest that such combination outperforms either SPS or VICReg alone on some non-trivial cases. >**What are the ways to prevent overfitting with the approach?** **Reply 2**: Thank you for raising such a profound question. For SPS, overfitting may be caused jointly by insufficient data and the symmetry assumption. With a limited amount of data, there may be multiple ways of describing the system dynamics, and all of them may conform to the assumed symmetry, but some yield more natural representations than others. We encountered such a situation before. In the bouncing ball experiment, if all trajectories in the dataset overlap at a single point, SPS will treat that point as the origin and learn polar coordinates to represent the ground plane instead of Cartesian coordinates. Given the special dataset, polar coordinates fully adheres to the prescribed symmetry constraint. If the goal is to learn a unique interpretable representation without obtaining more data domains, the only way may be to add new symmetry constraints. However, interpretable representations are not unique to begin with. Even if the model doesn’t yield the expected representations, it has "discovered" new interpretable representations (e.g. polar coordinates), which can still be valuable. Analogously, if relativity represents the true system dynamics, then the Newtonian mechanics essentially overfits to macroscopic low-speed phenomena, yet it yields interpretable representations. > **SPS can still learn interpretable representations even with incorrect symmetry assumptions. What might be the theoretical basis for this? Could there be instances where this doesn't hold true?** **Reply 3**: That is an important question and we thank the reviewer for emphasising it. Our preliminary ideas include using measurement theory to characterise learnable representation dynamics with and without SPS. The correct symmetry would be the strictest equivariance constraint that is physically true. Other constraints are either unphysical or loose. Therefore, the theory that predicts the effect of incorrect symmetry assumptions can very possibly be the same theory that explains the efficacy of SPS itself, showing exactly how SPS constrains the representation dynamics. However, for a full theory please expect future work. It will be much appreciated if you would point us some directions during the discussion phase! While the rigorous mathematical theory behind the efficacy of SPS remains an open question, we can still consider SPS training with incorrect symmetry as a general regularisation problem in machine learning. A toy analogy is ridge regression – we know it leads to better results if the true prior is Gaussian. However, if we assume an incorrect mean, variance, penalty weight, or even to use lasso instead of L2 norm, it can still bring some benefits as long as the incorrect assumption is not “too far” from the true one. Again, to quantify “too far”, we will need a formal theory, and in Figure 8, we consider the cases to be not “too far”. > **How does the proposed SPS model relate to, or differ from, other models or techniques used in representation learning? Could SPS be integrated with other methods to address its limitations?** **Reply 4**: From a energy-based model point of view, SPS is a regularised, non-contrastive method. It is not an architectural method, making it compatible with other techniques. It is not a contrastive method since SPS doesn’t need contrastive samples. It is highly related to VICReg [1]; both try to find a general regularised inductive bias for the latent space, but VICReg maximises information content of predictable representations while SPS purely considers the dynamics of z time-series. Since representation augmentation is generally agonistic to the specific training pipeline, SPS can be integrated with other methods to address its limitations. In the paper we already used SPS in conjunction with VAE. In our later experiments, we try to solve an “unconstrained style factor problem” (related to A.3) by contrastive training with SPS+VICReg. VICReg is in charge of preventing representation collapse without any decoder and SPS regularises the content factor. [1] Bardes, A., Ponce, J., & LeCun, Y. (2021). Vicreg: Variance-invariance-covariance regularization for self-supervised learning. arXiv preprint arXiv:2105.04906.
null
null
null
null
null
null
Differentiable and Stable Long-Range Tracking of Multiple Posterior Modes
Accept (poster)
Summary: The paper proposes the "importance weighted samples gradient (IWSG)" estimator and describes its integration into a "mixture density particle filter (MDPF)" for state space architectures. Similar to regularized particle filters, the MDPF framework represents the continuous state posterior with a continuous mixture density, which is differentiable and can be trained end-to-end using the proposed IWSG. As a discriminative approach, the proposed particle filter does not require the specification of a generative model and, in contrast to other discriminative particle filters, returns unbiased and low-variance gradient estimates. Strengths: - Efficient and accurate particle-based estimation of complex posteriors in sequence models is an important topic and has a long history in the machine learning community. - The paper includes an easy-to-follow recap of particle filtering and puts the proposed approach in context with many relevant discriminative particle filters, including a thorough discussion of their weaknesses that motivates both IWSG and MDPF. - The proposed discriminative particle filter tackles a number of well-known challenges, such as differentiation through the resampling step, biased or high-variance gradient estimates, and limited expressiveness of the posterior parameterization. - The experiments compare the proposed particle filter to a broad spectrum of relevant baselines and demonstrate MDPFs superior performance in three state estimation tasks. Weaknesses: - One of my concerns with this paper is its confusing presentation. The text often jumps between motivation, technical background, and related work, making it difficult to follow the intended train of thought. This problem is further aggravated by the fact that there are multiple technical streams: generative vs. discriminative particle filters, unbiased/low-variance gradient estimation, and inference in mixture models. There are many balls in the air and the paper has a hard time telling a streamlined story. Unfortunately, these are not the only problems with the presentation: Section 4 mixes technical background, the paper’s first main contribution (IWSG), and an experiment; Section 5 mixes the paper’s second main contribution (MDPF) with related work and contains no mathematical description of the model. - The figures are weak for multiple reasons: (1) the font size is much too small; (2) the main text and the figure captions cannot be read independently, because the figures contain critical information that is not explained anywhere in the text; (3) the lack of subfigures ((a),(b),…) makes it difficult to map the information in the captions to individual subplots; and (4) the figures are not placed on the page they are referenced. Figure 4 is especially problematic as it describes the core of the paper’s contribution but is impossible to understand without further context (which the text does not provide either). - The paper’s weak presentation also makes it difficult to assess the two contributions (IWSG and MDPF): (1) l.170 mentions that the proposed IWSG estimator is a continuous variant of the discrete estimator used in [6], but the exact relationship between the two remains unclear. Are there non-trivial challenges that prevent a direct generalization of [6] to a continuous setting?; (2) l.89/l.209 mentions that the proposed MDPF is a variant of [14,15], but again the differences are not explained in enough detail. Are there non-trivial challenges that prevent a direct application of [14,15] in a discriminative setting? - One of the paper’s main claims is MDPF’s ability to compute unbiased and low-variance gradients. Unfortunately, neither the technical sections nor the experiments provide any *direct* evidence supporting this claim. I would have liked to see either a technical analysis of the proposed estimator's properties or experiments that go beyond the evaluation of NLL/RMSE and at least give empirical insights along those lines. Overall, I feel this paper would benefit from another round of polishing before publication, including a reworked presentation and better positioning and differentiation of its contributions. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Section 4: It is mentioned that the optimal transport methods described in [44,45] are compatible with Gaussian mixtures (l.154f), so I’m curious why they are not part of the evaluated methods? - Section 6: The design of the architecture used in the experiments (Sections 6.1-6.3) is not clear enough. Are all experiments based on Figure 4? If so, what are the encoders and transformations? - Section 6: One of the arguments against [5] is its use of truncated gradients (l.118f), however, l.244 mentions that the experiments use truncated BPTT. Doesn’t that bias the MDPF gradients as well and makes the arguments in favour of MDPF obsolete? - Section 6: I’m curious how the number of particles affects the reported performance metrics, and especially if the observed trends remain the same if the number of particles is significantly increased or decreased. If the authors have additional results along those lines it would be interesting to see them. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: - I commend the authors for including a paragraph on the limitations of the proposed particle filter, such as its supervised nature and computational complexity on high-dimensional data. - The paper does not discuss any ethical concerns related to the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough comments and helpful feedback, which we will use in future revisions. We concede that the presentation in the paper can be improved, and based on feedback we intend to move some technical details that are present in the supplement into the main paper text (see global response). In the meantime, we would like to point the reviewer to Appendix A of the supplement, which we believe addresses many of your concerns. Specifically, Appendix A contains information on prior works which clarifies how our IWSG extends work from [6], as well as the differences and similarities between our MDPF and classical regularized particle filters. Furthermore, in Appendix A.4 we give additional details on the general architecture/parameterization of MDPF and A-MPDF (details on encoders and transformations), and in Appendix D we give specific architectural details for all neural network components in MDPF and A-MDPF. With regard to optimal transport methods [44, 45] mentioned in the main paper text but for which we do not compare against: both [44] and [45] proposed gradient estimators focused on static distributions, and thus cannot be immediately applied to our particle filter context. The text citing these papers was unclear about this, we will revise. (Aspects of this work are also specialized to Gaussian mixtures, and cannot handle the von Mises kernels used in our experiments.) For this reason, we instead compare to the Entropy-Regularized Optimal-Transport [8] (OT-PF) method, which was an optimal transport method specifically engineered to integrate with particle filters. With regard to truncated Back-Propagation-Through-Time (BPTT) biasing MDPF, it is true that truncating gradients biases all methods (including MDPF) to reduce variance. The main contribution of MDPF is the ability to propagate gradients temporally across multiple timesteps (between the sparse truncation points) which baselines like TG-PF [5] are unable to do. With MDPF the BPTT truncations can be selected to balance bias and variance, as well as the computational requirements of training (frequent truncation is computationally cheaper, but more biased). TG-PF forces truncation at every time-step, and thus bias cannot be reduced even if additional computation is available. Unfortunately we did not run additional experiments with varying numbers of particles, as training multiple baseline methods for multiple trials (see box plots in Fig. 5,7) is already computationally demanding. But in the 1-page rebuttal figure PDF (Fig. 1), we do compare computational cost of the various methods with varying numbers of particles. [5] Rico Jonschkowski, Divyam Rastogi, and Oliver Brock. Differentiable particle filters: End-to-end learning with algorithmic priors. Proceedings of Robotics: Science and Systems (RSS), 2018. [6] Adam Scibior, Vaden Masrani, and Frank Wood. Differentiable particle filtering without modifying the forward pass. International Conference on Probabilistic Programming (PROBPROG), 2021. [8] Adrien Corenflos, James Thornton, George Deligiannidis, and Arnaud Doucet. Differentiable particle filtering via entropy-regularized optimal transport. International Conference on Machine Learning (ICML), 2021. [44] Martin Jankowiak and Fritz Obermeyer. Pathwise derivatives beyond the reparameterization trick. International Conference on Machine Learning (ICML), 2018. [45] Martin Jankowiak and Theofanis Karaletsos. Pathwise derivatives for multivariate distributions. International Conference on Artificial Intelligence and Statistics (AISTATS), 2019. --- Rebuttal Comment 1.1: Comment: I want to thank the authors for their thoughtful rebuttal, in particular regarding the paper’s contribution relative to [6] and the role of truncated BPTT in the experiments. I still feel it would be in the authors' best interest to let this paper go through another round of polishing, so that the interesting ideas are not obfuscated by the presentation. I have increased my rating to 5 in response to the clarifications in the rebuttal.
Summary: This paper discusses the limitations of traditional particle filters in representing multiple posterior modes and their applicability to high-dimensional observations like images. Instead, the authors propose a method that leverages training data to learn particle-based representations of uncertainty using deep neural network encoders. The authors address the issue of biased learning and heuristic relaxations by representing posteriors as continuous mixture densities, allowing for unbiased and low-variance gradient estimates. Strengths: - This paper discusses an interesting problem. - It is easy to read and follow the train of thought. - Literature review is sufficient to grasp the idea of the idea of the paper. Weaknesses: - As MDPF relies on DNN parameterizations for both dynamic and measurement models, this introduces significant complexity to the model which can make training and inference more computationally expensive. - Additional complexity of the model due to decoupling for A-MDPF as it requires careful tuning for separate bandwidth parameters. - While the paper mentions that bandwidth parameters can be learned through end-to-end training, optimizing this parameter efficiently can require careful tuning and thus is challenging. - I am still a bit uncertain about the authors' claim regarding the use of smaller NNs and non-learned operations in constructing the dynamic and measurement models. - Lack of comparison with alternative frameworks for tracking and robot localizations and mostly applying to synthetic data. - The limitations of the simulation setup in evaluating the MDPF and A-MDPF algorithms can be identified as follows: * The state estimation tasks focus on a 3D state consisting of translation and angle components (s = (x, y, θ)). While this simplification allows for manageable simulations, it may not capture the full complexity of state estimation in more challenging real-world scenarios with higher-dimensional state spaces. * The evaluation compares the MDPF and A-MDPF algorithms with a few selected particle filter variants (TG-PF, SR-PF, OT-PF, DIS-PF, C-PF) and an LSTM model. While this provides some insight into the performance of the proposed methods, a more comprehensive comparison with a wider range of state estimation algorithms would provide a better understanding of their strengths and weaknesses. * The training data uses sparsely labeled true states every 4th time-step, which may not accurately represent the labeling constraints in real-world applications. Dense labeling is crucial for accurately assessing the performance of state estimation algorithms. Although the evaluation uses densely labeled datasets, the training process may not fully exploit the benefits of dense labeling. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Questions regarding the proposed importance weighted samples gradient (IWSG) estimator: 1- How effective is the IWSG estimator in reducing variance compared to other gradient estimation techniques? 2- Are there scenarios or specific mixture distributions where the IWSG estimator may still suffer from high variance? 3- Have there been any empirical evaluations to quantify the variance reduction achieved by the IWSG estimator? 4- How does the choice of the proposal distribution, q(z), affect the accuracy and efficiency of the IWSG estimator? 5- What is the computational cost associated with using the IWSG estimator compared to other gradient estimation methods? Questions regarding the Algorithms: 1- How does the complexity of the deep neural networks used in MDPF and A-MDPF impact the computational requirements for training and inference? 2- Are there any strategies or techniques to improve the computational efficiency of these models? 3- How does the performance of MDPF and A-MDPF depend on the quality, size, and representativeness of the training data? 4- What happens when there is limited or biased training data available? 5- Can the learned parameters and decisions made by MDPF and A-MDPF be interpreted and understood? 6- Are there any techniques or approaches to enhance the interpretability of these models? 7- How does the performance of MDPF and A-MDPF depend on the choice and optimization of the bandwidth parameter (β) for kernel density estimation? 8- Are there any alternative methods or strategies to optimize this parameter effectively? 9- How does the decoupling of mixtures used for particle resampling and posterior state estimation affect the performance and training of A-MDPF? 10- Are there specific scenarios or domains where A-MDPF offers significant advantages over MDPF? 11- How well do MDPF and A-MDPF generalize to different problem domains or scenarios? 12- Have these models been tested on real-world datasets, and how do they perform compared to alternative approaches in those contexts? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Please refer to the weakness and question. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and feedback. We address various comments and reviewer questions below. We would like to highlight Appendix A.4 (supplement) which gives additional details about the use of smaller neural networks and non-learned operations when constructing the dynamic and measurement models, and how this structure is motivated by particle filter mathematics. With regard to the computational cost of parameterizing the dynamics and measurement models by deep neural networks, we first note that MDPF makes no restrictions on the parameterization of either model other than differentiability (see Sec. 3). Hand-engineered parametric models with limited tunable parameters may be substituted for either model if deep neural networks are too computationally expensive, though the expressiveness and flexibility to match complex training data would be lost. With regard to the additional bandwidth parameters in our A-MDPF methods, we find that careful manual tuning of the bandwidth for MDPF and A-MDPF is not necessary. During end-to-end training, the bandwidth can be optimized simultaneously along with the dynamics and measurement models; this approach is successful in all of our experiments. IWSG Questions: 1. IWSG offers much lower variance gradient estimates compared to other estimators like IRG. We highlight this in Sec. 4 (main text) and Appendix B (supplement). 2. The only regime where IWSG has been observed to have high-variance is when the number of particles is extremely small (less than 20). 3. We empirically show the variance reduction in Fig. 3 (main text) and Fig. 2 (supplement), where it is clear that our IWSG estimator is much lower variance than IRG. 4. We frame IWSG as an estimator for estimating gradients for samples drawn from some mixture distribution $m(z)$. The proposal distribution is always the “current” mixture distribution $q(z) = m(z)$, before gradient-based perturbation of the mixture parameters. Thus, unlike some applications of importance sampling, there is no need to manually choose a proposal distribution. 5. Please see our global response to all reviewers, and the additional plot in our 1-page rebuttal PDF (Fig. 1). Algorithm Questions: 1,2. The majority of the computation time of MDPF is neural network evaluation, and thus more computationally intensive networks will require more compute time. MDPF gives no constraints on neural network architecture (other than they must be differentiable), as stated in Sec. 3 (main text). Thus, methods developed to increase computational efficiency of general neural networks can be incorporated into MDPF training. 3,4. The quality of the learned dynamics and measurement models is dependent on the quality available data. Our experiments show that training may be effective even when observations are temporally sparse, but of course, biased or insufficient data can lead to biased or inaccurate models. 5,6. The latent space (particles, weights, bandwidths) of MDPF and A-MDPF are defined by the training data, and are thus interpretable with real world meaning as highlighted in sections 2.1 and 5 (main text). The parameters of the neural networks are not directly interpretable, as is the case with most neural networks. MDPF does not place any restrictions (other than differentiability) on the models used, and therefore interpretable parameterizations could be used. 7,8. The performance of MDPF (and A-MDPF) is dependent on the choice of bandwidth parameter $\beta$ and thus we choose to set it as a learned parameter. Very-small bandwidths ($\beta << 0.001$) can lead to numeric instability in float32 systems, and very-large bandwidths lead to poor MDPF performance. Classical methods, with known drawbacks, for setting the bandwidth exist and are referenced in section 2.2. We find that learning the bandwidth end-to-end is simple and effective. 9 . Decoupling posterior and resampling mixtures as done in A-MDPF gives superior performance as shown in the results in section 6. Training A-MDPF requires additional computation (due to the additional networks) but a pre-trained MDPF can be used to initialize A-MDPF allowing for faster training. 10 . We find that in general A-MDPF offers performance increases over MDPF due to the flexibility provided by decoupling the posterior and resampling distributions. This can be seen in our results section 6 (specifically fig. 7). 11 . Our MDPF and A-MDPF learn problem-specific dynamics and measurement models, and are easily applied to any domain where appropriate data is available. Application to test data with altered dynamics/likelihoods, without first retraining, may lead to poor performance. 12 . Prior work in this area has mainly been evaluated on synthetic datasets with high-dimensional observations, and we take the same approach. We evaluate on the same datasets as methods we directly compare to [1-3]. The House3D data is fairly realistic and the 3D environments it is based on data taken from real-world homes, but the observation images are still rendered with computer graphics. Given the strong performance of MDPF, we believe experimentation with richer real-world datasets is a promising area for future research. [1] Adam Scibior, Vaden Masrani, and Frank Wood. “Differentiable particle filtering without modifying the forward pass”. International Conference on Probabilistic Programming (PROBPROG), 2021 [2] Peter Karkus, David Hsu, and Wee Sun Lee. “Particle filter networks with application to visual localization”. Conference on Robot Learning (CORL), 2018. [3] Adrien Corenflos, James Thornton, George Deligiannidis, and Arnaud Doucet. Differentiable particle filtering via entropy-regularized optimal transport. International Conference on Machine Learning (ICML), 2021. --- Rebuttal Comment 1.1: Comment: I'd like to thank the authors for their responses and clarifications. While I maintain my appreciation for their work, I believe it would greatly enhance the manuscript if the authors consider refining their content to address some of the concerns raised by the reviewers. I, however, keep my score as is.
Summary: The paper studies the problem of sequential state estimation using discriminative particle filters. Particle filters (or SMC) methods are widely used in this setting owing to their flexibility. Recently, inspired by the success of deep learning enabled by end-to-end differentiable methods, there has been interest in imparting similar properties to particle filters for improving performance, in particular when modelling temporally extended systems. The authors first review some existing approaches for particle filtering with a focus on regularized particle filters as well approaches for making the non-differentiable resampling step differentiable. The authors highlight several drawbacks with the existing methods with differentiable resampling, including biased gradients, numerical instability and high variance gradient estimates. The proposed method is based on leveraging implicit reparameterization gradients coupled with an importance sampling scheme to obtain an unbiased gradient estimator for continuous mixture distributions. The implicit reparameterization avoids the inversion of the standardization function and the importance sampling scheme reduces the variance of the estimates. The approach avoid various pitfalls of previous attempts to make the resampling step differentiable. The authors leverage this estimator in their Mixture Density Particle Filter method, which is evaluated through a wide variety of experiments with thorough quantitative and qualitative analysis - achieving improved performance across tasks. Strengths: * Sequential Monte Carlo approaches are widely used across various domains. Improvements to the algorithm can have a broad impact. The importance-weighted gradient estimator proposed in the paper can improve the performance on a wide-variety of tasks, beyond those studied in the paper. * The proposed approach is principled and novel, leveraging recent advances in implicit reparameterization along with classical insights from the sampling literature to improve the performance of regularized particle-filters. * The resulting method MDPF is conceptually simple, and can be incorporated easily as an alternative to existing regularized PF approaches. * The paper is quite well written with a nice review of existing approaches and a lot of useful details about the experiments. I do think the introduction can be improved a bit. Weaknesses: * A key weakness of the approach is scaling to higher dimensional problems. It is a well known problem that importance sampling based estimators can still have high variance induced by some bad samples with high weights. (This is already discussed in the paper) * While I appreciate the thorough experiments in the paper one issue is that most of the experiments are on relatively low dimensional problems. * Although the details of the experiments are covered fairly well, the absence of code could be a problem for reproducibility efforts. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * What would be potential solutions to scaling the method for higher dimensional problems? * Have you tried the approach on some standard SMC tasks [1]? Does the gradient estimator improve performance there? [1] An introduction to Sequential Monte Carlo by Nicolas Chopin and Omiros Papaspiliopoulos. https://particles-sequential-monte-carlo-in-python.readthedocs.io/en/latest/ Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The major limitations of the method are already highlighted in the paper - namely scaling to higher dimensional problems and reliance on labelled data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your praise and feedback. We are grateful for the careful review of our work and appreciate your highlights to the broad applicability of our IWSG estimator beyond particle filtering and the simplicity and effectiveness of our MDPF method. Our intention is to release the code publicly (on Github) after some code cleanup and documentation. Regarding the application of particle filters in higher dimensions, we first note that our experimental domains are chosen to allow direct comparison to prior work, and are challenging in spite of their moderate dimension. Scaling particle filters to higher-dimensional states does generally require more particles, but the number of required particles is heavily dependent on the posterior uncertainty; latent spaces with high posterior entropy require more particles [2]. By making the dynamics and measurement models of the particle filter learnable and effectively training them via our MDPF, the posterior state uncertainty can be reduced, allowing for fewer particles to be used. We conjecture that learning high-quality dynamics and measurement models, instead of hand-crafting models that only coarsely approximate reality, will allow particle filters to more efficiently scale to higher-dimensional problems. As indirect evidence for this, we note that several of the baseline papers we cite used much-larger particle sets at test than during training, to compensate for poorly training models (due to biased gradients). In contrast, our MDPF is effective when the test particle set size is exactly matched to training. With regard to “standard” SMC tasks as in [1], these methods assume known dynamics and measurement models, and thus the gradient-estimation innovations that our MDPF focuses on are not needed. Our Bearings-Only Tracking Task is similar to the tracking problem shown in 2.4 of [1], but more challenging: the latent state has an additional dimension, and the dynamics and measurement models must be learned from training data (rather than being fixed to the true process used to generate synthetic data). [1] Nicolas Chopin and Omiros Papaspiliopoulos. “An introduction to Sequential Monte Carlo”. Springer 2020. ISBN 978-3-030-47844-5 [2] Dieter Fox. “KLD-Sampling: Adaptive Particle Filters”. Advances in Neural Information Processing Systems (NeurIPS), 2001. --- Rebuttal Comment 1.1: Title: Response to author rebuttal Comment: Thank you for the response (and apologies for my tardy response to the rebuttal). I appreciate the authors response to my concerns. I will maintain my score recommending acceptance.
Summary: The paper consider the problem of particle filter (PF)-based state estimation in nonlinear models with unknown dynamics and (discriminative) observation models. The key challenge they address is the typical inability of traditional gradient learning approaches, applied to PF, to deal with (backprop through) the non-differentiable particle resampling stage. To do this, the authors propose a novel Importance Weighted Samples Gradient (IWSG) Estimator, based on a kernel-density representation of the resampling step. The IWSG is fully differentiable but is also unbiased and, in practice, low variance, particularly compared to the previously proposed Implicit Reparameterization Gradients (IRG) estimator. The authors conduct an extensive set of experiments on simulated data (synthetic + simulator object-in-a-maze tracking scenarios) to demonstrate the utility of their IWSG. IWSG is shown to outperform the competing approaches. Strengths: + Well-written paper, with sufficient details (in main paper + supplement) that clearly introduce the problem, describe prior PF-based approaches, and build the proposed Mixture Density Particle Filter (MDPF) using a novel IWSG framework, following the ideas of Scibior et al., but using a kernel density-based (mixture density) resampler. + Addresses an important problem in the application of particle filter-based state estimators/trackers for models with unknown (parametric) dynamics and observation models (discriminative) + Propose two variants of MDPF, a baseline with a single mixture density used for both particle resampling and state estimation, and an improved A-MDPF, which employs different mixtures for the two tasks. A-MDPF is generally demonstrated to be more effective at the added computational cost of having two mixtures. + Extensive and convincing experiments conducted in simulated settings (both simple synthetic models and more complex 2D and 3D based simulators, c.f., Deepmaze and House3D with image based observations). Experiments demonstrate that IWSG leads to consistently more accurate state estimates compared to either prior works / baselines (LSTM, TG-PF, OT-PF, SR-PF, DIS-PF, C-PF, TG-MDPF, and IRG-MDPF), together with lower estimator variances. Weaknesses: - Main paper does not clearly build the connection between Scibior et al. and the proposed approach, but this is clarified in the rather extensive supplement. The proposed approach can be seen as perhaps rather obvious, once the KDE mixture model is employed for the resampler, but it is nevertheless a fairly ingenious combination of MD + IWSG. - The authors acknowledge that PF-based approaches are not effective for high-dimensional state spaces and/or when the number of particles is large. However, they do not clearly investigate this dependency (all experiments are conducted on max 3D state spaces + bearing). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - How does the complexity of learning/inference depend on N? Do you have experimental evidence that demonstrates this? - Fig. 3 (supplement) shows some increasing variance trends in both gradients and states for MDPF (and A-MDPF). Can you comment on this / explain? Was this tendency observed in experiments in the simulator data? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The authors has addressed limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive comments and constructive feedback. We agree that some of the discussion of baseline methods in the main text is not sufficiently clear, and will shift material from Appendix A to the main paper to address this issue. Please see our global response to all reviewers for discussion of computational complexity, and the new plot in Figure 1 of the rebuttal pdf. While the computational cost of MDPF gradient computation with N particles scales as O(N^2) during training, the cost of MDPF resampling at inference/test time is O(N log(N)), comparable or better than baseline methods. With regard Fig. 3 (main paper) and Fig. 2 (supplement), the perceived increasing variance trends for both states and gradients of MDPF is a rendering artifact from using log scale on the horizontal axis. We have re-rendered the plot with linear scale (shown in the 1-page rebuttal figure PDF, figure 2) to show that the variance of our MDPF does not tend to increase. --- Rebuttal Comment 1.1: Comment: I am happy with the authors' rebuttal and will keep my score.
Rebuttal 1: Rebuttal: Thank you all for your feedback and helpful suggestions. We want to address a few topics that were referenced by multiple reviewers. First, for additional technical details about our methods as well as baselines, please refer to Appendix A of the supplement. When drafting the paper, we put these details in Appendix A to allow more space for experimental details. However, based on reviewer feedback, we now realize that this made it harder for readers to understand details of our mixture-density particle filter (MDPF) and gradient estimators. In future revisions, we will move key parts of Appendix A back to the main paper (with space made available by shifting some parts of Sec. 6 to the appendix). Several reviewers asked how the computational complexity of our MDPF scales with the number of particles N. Figure 1 in the 1-page rebuttal PDF shows empirical trends for all methods. At training time, the complexity of MDPF gradient computation is O(N^2) per time step: each of the N particles at the next time point could have been generated by any of the N mixture components at the previous time point, and the probabilities of these events must be computed. Note that this overhead is not substantial for moderate numbers of particles, because for models of complex domains like images, resampling is much faster than neural-network likelihood evaluations whose cost remains O(N). This extra computational cost is also key to avoiding the instability and poor performance of several baseline methods, which suffer from the “ancestor problem” (see Sec. 3). Note also that this MDPF gradient computation is far cheaper than optimal-transport methods for aligning particles across time. At inference or test time, gradient computation is not needed, and the MDPF has similar computational cost to “standard” particle filters. Discrete resampling of N particles from the categorical distribution of particle weights requires O(N log(N)) time [1,2], and then particles may be perturbed by Gaussian noise (with learned bandwidth) in O(N) time. This is in contrast with optimal-transport methods, which require expensive optimization at test as well as training. We are currently in the process of cleaning and documenting our implementation of the MDPF, and will release code on Github when the paper becomes publicly available. [1] https://en.wikipedia.org/wiki/Categorical_distribution#Sampling [2] https://en.wikipedia.org/wiki/Alias_method Pdf: /pdf/33e39f96eb347d1174041f61a928164e97012beb.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper proposes an unbiased and low-variance gradient estimator for differentiable particle filters, where resampling steps incurs discrete changes that are non-differentiable in previous methods. The proposed method solved the problem by representing posteriors as continuous mixture densities, which is similar to the idea of regularized particle filters. Strengths: The paper is well-presented, motivation and proposed method are clearly explained, experimental results demonstrated the effectiveness of the proposed gradient estimator for differentiable particle filters, especially when the posterior is multimodal. Weaknesses: The performance of the proposed method when used in differnetiable particle filters trained with unsupervised loss when labelled data are not available is not discussed in the experiments/ Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. As far as I can see, all the experiments presented in the paper assumed the true latent state is available during training, do you have any idea about how to train differentiable particle filters when true states are not available in training but we still want to track the latent state in testing? 2. The paper claimed the proposed estimator is unbiased, but it is not obvious to me why it is unbiased and I expect to see a theoretical proof of this. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The proposed method is based on importance sampling, therefore some discussions on the variance of the estimator will improve the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and feedback. To answer your specific questions: 1. Classical particle filters assume known dynamics and measurement models, and thus also implicitly assume the latent state has been manually defined; typically it has an interpretable real-world meaning such as the position or velocity of the system. Our MDPF extends the classical particle filter by allowing the dynamics and measurement models to be learned, training them end-to-end from (possibly sparse) observations. In general to allow prediction of latent states, the state space must be defined somehow, either by hand-crafting of models (as in classical particle filters) or by training data (as in our paper). 2. Importance sampling is a broadly-applicable Monte Carlo method that, under very mild assumptions, produces unbiased estimates of expectations for any number of samples [1]. In general, importance sampling estimates the expectation with respect to some distribution $p(x)$ by drawing samples from some other distribution $q(x)$: $E_{p(x)}\Big[f(x)\Big] = \int_{x} p(x) f(x) dx = \int_{x}q(x) \frac{p(x)}{q(x)} f(x) dx = E_{q(x)}\Big[\frac{p(x)}{q(x)}f(x)\Big]$. Our IWSG estimator is unbiased since it is derived from an unbiased importance-sampling estimator. We will revise Sec. 4.2 to explain this more clearly. [1] C. P. Robert and G. Casella, “Monte Carlo Statistical Methods”, 2004. --- Rebuttal Comment 1.1: Title: Response to author rebuttal Comment: Thanks for the authors' response, I will keep my score as it is now and recommend an acceptance.
null
null
null
null
null
null
QuantSR: Accurate Low-bit Quantization for Efficient Image Super-Resolution
Accept (spotlight)
Summary: This manuscript is devoted to pushing the super-resolution (SR) models to ultra-low bit-width (2-4 bits). It proposes two methods and pushes the bit-width to ultra-low 2/4 bits with little accuracy loss. Meanwhile, the proposed methods not only boost the accuracy performance but also reduces the parameters and computation. It is rather difficult since the fewer parameters bring limited representation capability. The methods are well-motivated and effective, clearly demonstrating their motivation. Strengths: There are several strengths here: (1) This work proposes two methods; the redistribution-driven learnable quantizer (RLQ) mainly helps to improve the accuracy performance with little computation overhead, while the Depth-dynamic quantized architecture (DQA) further reduces the computation and parameters by integrating structured pruning. It is an interesting attempt to combine learnable structured pruning with learnable low-bit quantization. (2) Results shown in tables and visualizations are impressive. Figure 1 compares several quantization methods, and QuantSR shows a dominant advantage that can be clearly recognized. Many recently proposed methods are listed and compared, while QuantSR achieves consistent performance under 4/2 bit-width. (3) First, the motivation is clear: SR models rely on expensive computational resources, and model quantization is an effective approach that can reduce the model parameters and computation. Second, the equations are correct, and notations are well demonstrated, and the figures are helpful in understanding. Moreover, the references are extensive and cited in the correct places. (4) Compared with existing quantized SR methods, QuantSR achieves higher accuracy performance under 2/4 bit-width, which is close to the full-precision methods. And the compression ratio shown in Table 3 also shows the potential of quantization for practical value. Both quantizing and block dropping save the parameters and operations while keeping the accuracy. It can promote the implementation of quantizing SR networks in real-world scenarios. Weaknesses: There are several weaknesses here: (1) The functions in RLQ should be further clarified. Firstly, during the forward propagation, RLQ includes the function \phi, but it is not clear whether it plays a role in the forward. The author needs to clarify its function and discuss whether it would reduce the inference efficiency of the quantized model. Secondly, I am confused about the three equations in the backward propagation. As I understand, the function \phi primarily affects x (the input to the quantization function) in the backward, as indicated in the first equation of Eq 6. I suggest the author further discuss the meaning of these Eqs. (2) The results among CNNs and transformers are strong, but the latter seems to lack comparison, I suggest authors implement SOTA quantized SR method to transformer-based SR networks and report their results, it would be significantly helpful to improve the experiment part. (3) The writing of the paper should be improved, there are grammatical errors in the existing manuscript, and some symbols are not clarified. The authors should correct them thoroughly in the next revision. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The authors should respond to the raised issues in the weakness part. (1) Here is a concern about the RLQ. As far as I know, the LSQ quantization method [1] has been proposed to learn quantization steps for better projecting the floating-point values to the integer space. So can you clarify the difference between the RLQ in QuantSR and the existing quantization method? [1] LEARNED STEP SIZE QUANTIZATION. ICLR, 2021. (2) I notice that there are more learnable parts introduced to QuantSR, including the learnable interval and mean-shifting parameters in RLQ, and the trainable shortcut scores in DQA. I want to know the training difficulty of the proposed methods—any training algorithms/tricks and detailed strategies that are used in the training framework. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: See weaknesses and questions parts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1a**: The functions in RLQ should be further clarified. …, but it is not clear whether it plays a role in the forward. The author needs to clarify its function and discuss whether it would reduce the inference efficiency of the quantized model. **A1a**: We clarify that the $\phi$ function can be accurately embedded in the quantization interval of RLQ without affecting the quantized result in forward, and introducing information into the gradient when deriving backwards. Therefore, the $\phi$ function **does not affect the inference speed** of the quantized SR model, but only improves the optimization during training. > **Q1b**: Secondly, I am confused about the three equations in the backward propagation. … **A1b**: We clarify that Eq. (6) presents the derivative functions of the learnable parameters in the backward propagation. These include the derivative function for the input $\mathbf x$ (weights $\mathbf w$ or activation $\mathbf a$) (first equation), and the derivative function for the zero point $\tau$ (second equation), and the derivative function with respect to the quantization interval $v$ (third equation). In these derivative functions, the influence of the embedding function $\phi$ is considered and thus introduces information into the gradient to improve training. > **Q2**: The results among CNNs and transformers are strong, but the latter seems to lack comparison, … **A2**: We follow the reviewer's suggestion to compare more quantized transformer-based SR models. Specifically, we demonstrate the results of 4-bit SwinIR and CAT quantized by DoReFa and PAMS in Table A2 (*Table 1 of the attached PDF*). Our QuantSR method is able to stably outperform existing methods on different transformer-based SR models, implying that the advantages of the proposed techniques of QuantSR are robust across different architectures. Table A3a: Quantitative results. SwinIR and CAT-R are used as full-precision backbones. | | | \#Bit | Set5 | | Set14 | | B100 | | Urban100 | | Manga109 | | |-----------------------------|-------------------------|-----------|-------|--------|-------|--------|-------|--------|-------|--------|-------|-------------| | Method | Scale | ($w$/$a$) | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | | Bicubic | $\times$4 | -/- | 28.42 | 0.8104 | 26.00 | 0.7027 | 25.96 | 0.6675 | 23.14 | 0.6577 | 24.89 | 0.7866 | | SwinIR\_S | $\times$4 | 32/32 | 32.44 | 0.8976 | 28.77 | 0.7858 | 27.69 | 0.7406 | 26.47 | 0.7980 | 30.92 | 0.9151 | | DoRaFa | $\times$4 | 4/4 | 29.65 | 0.8416 | 26.92 | 0.7383 | 26.53 | 0.6993 | 23.85 | 0.6957 | 26.15 | 0.8218 | | PAMS | $\times$4 | 4/4 | 31.52 | 0.8865 | 28.19 | 0.7727 | 27.33 | 0.7279 | 25.35 | 0.7620 | 29.24 | 0.8924 | | QuantSR-T | $\times$4 | 4/4 | 32.18 | 0.8941 | 28.63 | 0.7822 | 27.59 | 0.7367 | 26.11 | 0.7871 | 30.49 | 0.9087 | | Bicubic | $ \times$4 | -/- | 28.42 | 0.8104 | 26.00 | 0.7027 | 25.96 | 0.6675 | 23.14 | 0.6577 | 24.89 | 0.7866 | | CAT\_R | $\times$4 | 32/32 | 32.89 | 0.9044 | 29.13 | 0.7955 | 27.95 | 0.7500 | 27.62 | 0.8292 | 32.16 | 0.9269 | | PAMS | $\times$4 | 4/4 | 31.77 | 0.8885 | 28.33 | 0.7752 | 27.40 | 0.7295 | 25.55 | 0.7672 | 29.62 | 0.8966 | | QuantSR-T | $\times$4 | 4/4 | 32.31 | 0.8955 | 28.74 | 0.7836 | 27.61 | 0.7389 | 26.31 | 0.7896 | 30.69 | 0.9118 | > **Q3**: The writing of the paper should be improved, … **A3**: We will follow your suggestions to carefully refine our manuscript in the final revision. > **Q4**: … can you clarify the difference between the RLQ in QuantSR and the existing quantization method (LSQ [1])? **A4**: Compared with LSQ mentioned by the reviewer, our RLQ comprehensively improves the forward and backward of the quantizer to enhance the propagated information in the quantized SR network. Specifically, for forward propagation, RLQ not only learns the quantization interval but also the zero-point of the quantizer, allowing for significant diversification of the quantized parameter in the whole network. For backward propagation, the soft embedded function is to incorporate information reflecting the quantizer's actual behavior while ensuring optimization stability. These improvements make RLQ significantly outperform the existing quantizer. > **Q5**: … I want to know the training difficulty of the proposed methods—any training algorithms/tricks and detailed strategies that are used in the training framework. **A5**: The training of QuantSR exhibits stability and robustness. We provide a comprehensive explanation of our training pipeline and experimental setup in Sec 3.4 and 4.1, respectively. The training of all learnable parameters is uniformly performed using the loss function defined by Eq. (10), without any additional constraints. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: Thanks for the authors' reply. The rebuttal addressed my concerns well. After carefully reading other reviews and the rebuttals, I would keep my rating for the sufficient contributions and well-supported experiments. Although there are several quant methods in high-level tasks, while there are still few explorations on low-level tasks. Thus, this paper would be helpful for the future work in light-weight low level vision.
Summary: This paper proposes a novel quantization method for SR models, including two new technologies to improve the unit representation and architectural potential of quantized SR models and push low-bit SR models to full-precision performance. The results on CNN and Transformer models on SR tasks show that QuantSR achieves SOTA performance and exceeds existing SR quantization methods. The most significant point of this paper is that the dynamic quantized architecture of QuantSR pushes up the performance ceiling of quantized SR from a block-stacking perspective and allows flexible reasoning according to resources in actual deployment, leading to a new promising way to lossless quantized SR. Strengths: In this paper, the authors propose novel RLQ and DQA technologies to jointly improve quantized SR models from two orthogonal perspectives of the unit representation and architectural potential: a) RLQ is an effective technique. By introducing learnable parameters and redistribution functions in the forward and backward propagation of quantized computing units, the amount of information in the quantization training propagation process is significantly improved and more accurate, and the representation ability of the quantization unit is improved. b) DQA is more attractive to me. In addition to being effective, this is a novel attempt to improve the quantized SR model from an architectural perspective. QuantSR's dynamic quantization architecture pushes up the performance ceiling of quantized SR from a block-stacking perspective, thus providing a new promising way towards lossless quantized SR models. In addition, it also allows the quantized SR model to perform flexible reasoning according to the resources in the actual deployment, which means that the practicality of the quantized SR model is greatly enhanced. The experimental results are superior. QuantSR shows significant performance improvement under all bit widths and surpasses the existing quantized SR methods. This enables QuantSR to achieve a SOTA structure while being novel. The visualization results clearly show that the effect of the proposed technologies follows their motivation. Weaknesses: a) One notable issue is there are some places in the paper that should be revised and clarified, including but not limited to the following: - In the box of Redistribution in Fig.2, should the y-axis coordinate be PDF(a), that is, the probability density function of a - In the box of Learnable Quantizer in Fig.2, according to the formula Eq.5 in the paper, s should be corrected to vˆb - The gamma in Stage 2 in Fig.2 is undefined - Symbol a is defined repeatedly as activation or range in Eq.4 - According to the implementation in the article, the options of var may be actually 200% (32 blocks), 100% (16 blocks), and 50% (8 blocks) b) In RLQ, the ϕ function seems to be used as an estimator to replace STE, so why it needs to be used in the forward pass and not just backward? In addition, the authors also need to explain why the ϕ function adopts this shape instead of the existing soft quantization functions such as DSQ [1] and Bireal [2]. c) The author should compare more quantized transformer SR models, and existing experiments only show the QuantSR performance on this type of network, other quantized methods are also needed. And it is necessary to show the comparison of the parameters and calculation amount of SwinIR and SRResNet, including full precision and quantized versions, to make the comparison more fair and clear. [1] Gong R, Liu X, Jiang S, et al. Differentiable soft quantization: Bridging full-precision and low-bit neural networks, ICCV, 2019. [2] Liu Z, Wu B, Luo W, et al. Bi-real net: Enhancing the performance of 1-bit cnns with improved representational capability and advanced training algorithm, ECCV, 2018. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The author proposes an effective quantification method for SR tasks in this paper, and I suggest the author discuss the potential of QuantSR in more low-level vision tasks in more detail, which will make the contributions of the proposed method more significant and wide. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1**: One notable issue is there are some places in the paper that should be revised and clarified, including but not limited to the following: [...] **A1**: Thank you for pointing them out. We will carefully revise them in our final revision. > **Q2a**: In RLQ, …, so why it needs to be used in the forward pass and not just backward? A2a: We clarify that the $\phi$ function can be accurately embedded in the quantization interval of RLQ without affecting the quantized result and introducing information into the gradient when deriving backwards. Therefore, the $\phi$ function does not affect the inference speed of the quantized SR model, but only improves the optimization during training. > **Q2b**: In addition, … why the ϕ function adopts this shape instead of the existing soft quantization functions such as DSQ [1] and Bireal [2]. **A2b**: Compared to existing quantization methods that use soft functions, such as the tanh-based DSQ function [1] and quadratic-based function [2], the $\phi$ function in RLQ has a stable impact in once backward while the former two are more intense. This property allows the quantized SR model to update stably without collapsing and ultimately enhances the accuracy by introducing forward vectorized information into the gradients. The results in Table 4 demonstrate that QuantSR significantly outperforms other SR model quantized soft-function-based methods. > **Q3a**: The author should compare more quantized transformer SR models, and existing experiments only show the QuantSR performance on this type of network, other quantized methods are also needed. **A3a**: We follow the reviewer's suggestion to compare more quantized transformer-based SR models. Specifically, we demonstrate the results of 4-bit SwinIR and CAT quantized by DoReFa and PAMS in Table A3a (*Table 1 of the attached PDF*). Our QuantSR method is able to stably outperform existing methods on different transformer-based SR models, implying that the advantages of the proposed techniques of QuantSR are robust across different architectures. Table A3a: Quantitative results. SwinIR and CAT-R are used as full-precision backbones. | | | \#Bit | Set5 | | Set14 | | B100 | | Urban100 | | Manga109 | | |-----------------------------|-------------------------|-----------|-------|--------|-------|--------|-------|--------|-------|--------|-------|-------------| | Method | Scale | ($w$/$a$) | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | | Bicubic | $\times$4 | -/- | 28.42 | 0.8104 | 26.00 | 0.7027 | 25.96 | 0.6675 | 23.14 | 0.6577 | 24.89 | 0.7866 | | SwinIR\_S | $\times$4 | 32/32 | 32.44 | 0.8976 | 28.77 | 0.7858 | 27.69 | 0.7406 | 26.47 | 0.7980 | 30.92 | 0.9151 | | DoRaFa | $\times$4 | 4/4 | 29.65 | 0.8416 | 26.92 | 0.7383 | 26.53 | 0.6993 | 23.85 | 0.6957 | 26.15 | 0.8218 | | PAMS | $\times$4 | 4/4 | 31.52 | 0.8865 | 28.19 | 0.7727 | 27.33 | 0.7279 | 25.35 | 0.7620 | 29.24 | 0.8924 | | QuantSR-T | $\times$4 | 4/4 | 32.18 | 0.8941 | 28.63 | 0.7822 | 27.59 | 0.7367 | 26.11 | 0.7871 | 30.49 | 0.9087 | | Bicubic | $ \times$4 | -/- | 28.42 | 0.8104 | 26.00 | 0.7027 | 25.96 | 0.6675 | 23.14 | 0.6577 | 24.89 | 0.7866 | | CAT\_R | $\times$4 | 32/32 | 32.89 | 0.9044 | 29.13 | 0.7955 | 27.95 | 0.7500 | 27.62 | 0.8292 | 32.16 | 0.9269 | | PAMS | $\times$4 | 4/4 | 31.77 | 0.8885 | 28.33 | 0.7752 | 27.40 | 0.7295 | 25.55 | 0.7672 | 29.62 | 0.8966 | | QuantSR-T | $\times$4 | 4/4 | 32.31 | 0.8955 | 28.74 | 0.7836 | 27.61 | 0.7389 | 26.31 | 0.7896 | 30.69 | 0.9118 | > **Q3b**: And it is necessary to show the comparison of the parameters and calculation amount of SwinIR and SRResNet, including full precision and quantized versions, to make the comparison more fair and clear. **A3b**: We further present the FLOPs and storage of the full-precision and quantized transformer-based SwinIR models, and compare them with SRResNet and QuantSR-C. As shown in Table A3b, QuantSR can also bring significant speedup and compression to transformer-based models, which shows that QuantSR brings general efficiency improvements on different architectures. Table A3b. Comparison of the number of parameters and operations. | Method | \#Block | \#Bit($w$/$a$) | \#Params (M) | Ops (G) | |-------------|-----------|----------------|--------------|-----------| | SRResNet | 16 | 32/32 | 1.37 | 90.1 | | QuantSR-C | 16 | 4/4 | 0.30 | 20.2 | | SwinIR-S | 4 | 32/32 | 0.88 | 195.6 | | QuantSR-T | 4 | 4/4 | 0.20 | 43.4 | --- Rebuttal Comment 1.1: Title: Thank you for addressing my comments Comment: I appreciate your efforts in conducting additional experiments and incorporating my suggestions. The paper was a pleasure to read and I am maintaining my score of 7. Thank you.
Summary: The paper proposes a new quantization scheme for super-resolution (SR) networks. The paper starts with a claim that weight quantization results in the homogeneity of parameters, leading to the loss of gradient information during backward pass. To resolve the claimed issue, the paper introduces Redistribution-driven Learnable Quantizer (RLQ) to reduce homogenization that is caused by discretization. RLQ is implemented by redistributable and learnable quantizer that improves the quality of information during both forward and backward passes, resulting in diverse representation, without additional inference overhead. The paper further introduces a Depth-dynamic Quantized Architecture (DQA) to better handle the performance-efficiency trade-off through learnable short-cut connections and weight sharing from different networks. Strengths: - The paper is easy to read and well-written - The paper introduces a new method RLQ to handle weight homogeneity issues, without additional inference overhead. - The paper introduces DQA to better control the trade-off between performance and efficiency. Weaknesses: - Missing discussions and quantitative comparisons against related works. What is the major difference between DQA and DropConnect [A] and AIG [B]? Also, why limiting to variants 100%, 50%, and 25% in comparison to these two related works? --- - The paper claims that there exists weight homogeneity due to quantization, which the proposed method mitigates. However, the paper does not present the demonstration (neither statistics nor visualization) of the extent of weight homogeneity before and after the proposed quantization scheme. --- - Can the authors provide intuitive explanation as to how the proposed method mitigates the weight homogeneity issues? Isn't the proposed method still susceptible to weight homogeneity, due to the limited possible number of values each weight can take? I'm guessing the learnable mean-shifting and quantization scale parameters are optimized to diversify weight values in the given possible values each weight can take. But, I can't see how the objective function and the current design can guarantee such diversification. More intuitive and detailed explanations could be helpful. --- - Can the authors provide intuitive explanation as to how the proposed method mitigates the weight homogeneity issues? Isn't the proposed method still susceptible to weight homogeneity, due to the limited possible number of values each weight can take? --- - Last but not least, there is no latency/bit operation efficiency comparisons against related works, such as PAMS. --- [A] Regularization of Neural Networks using DropConnect \ [B] Convolutional Networks with Adaptive Inference Graphs Technical Quality: 3 good Clarity: 3 good Questions for Authors: Refer to weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have addressed the limitations in the supplementary section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1**: Missing discussions and quantitative comparisons against related works. What is the major difference between DQA and DropConnect [A] and AIG [B]? Also, why limiting to variants 100%, 50%, and 25% in comparison to these two related works? **A1**: We will incorporate the related work suggested by the reviewer. Compared to these dynamic networks, DQA employs low-bit parameters and undergoes quantization-aware training. It is more significant that DQA is mainly devoted to utilizing a dynamic architecture to break through the precision limitation of quantized models. Benefiting from screening from a stronger initial SR model with more stacked quantized blocks, the upper boundary of the model performance with the original block size is shattered, resulting in enhanced accuracy. This means that the architecture not only allows quantized SR networks to achieve dynamic trade-offs during inference but also significantly improves the accuracy at the same model size. Due to the joint training of various variants, we just use three variants for QuantSR to control the training costs. For example, for 16-block SRResNet, QuantSR simultaneously trains weight-shared quantized variants with 32, 16, and 8 blocks, where the first and last one strike a tilt in accuracy and efficiency, respectively. It is worth noting that our method, based on the stacking of blocks, thus also allows for a more fine-grained selection of variants with varying the number of blocks based on specific requirements. > **Q2**: The paper claims that there exists weight homogeneity due to quantization, which the proposed method mitigates. However, the paper does not present the demonstration (neither statistics nor visualization) of the extent of weight homogeneity before and after the proposed quantization scheme. **A2**: Our RLQ achieves diversification of quantizers throughout the network during training by utilizing learnable intervals and zero points. Fig. 6(a) in the manuscript demonstrates that the learnable parameters of the weight quantizers gradually diversify with increasing training epochs. This indicates that the discretized mapping of weights in different layers starts to vary, resulting in diversified quantized weights across the entire network. Following your suggestion, we further present a comparison between visualizations of quantized weights with and without trainable parameters (see *Fig. 1 in the attached PDF*). As observed, the presence of learnable parameters in QuantSR leads to a progressive diversification of quantizers, consequently resulting in gradually diversified quantized weights across the entire network. In the quantized SR network without learnable parameters, the weights are more homogenized during training, and their distribution remains nearly unchanged. This illustrates the significant role of RLQ in addressing weight homogeneity. Overall, these findings demonstrate the effectiveness of RLQ in promoting diversity among quantizers and mitigating weight homogenization in the quantization process. > **Q3**: Can the authors provide an intuitive explanation as to how the proposed method mitigates the weight homogeneity issues? … **A3**: Our RLQ achieves the diversification of the quantizer by utilizing learnable intervals and zero points, guiding the diversified expression of quantized weights throughout QuantSR. While the quantization interval for each quantizer remains unchanged (e.g., 2^4 for 4-bit), the quantized weights exhibit diverse trends due to different quantizers learning distinct mappings during the training process. Detailed discussions and visualizations are presented in A2. Furthermore, it is worth noting that our RLQ does not require additional constraints and can be directly optimized using the original loss function. Since discretization leads to the degradation of pixel-level information in the SR network, introducing learnable parameters into the quantizer naturally promotes the diversification of both weights and quantizers without the need for extra objectives. This point is demonstrated by the visualizations in Fig. 6 of our original paper and *Fig. 1 in the attached PDF*. > **Q4**: Last but not least, there is no latency/bit operation efficiency comparisons against related works, such as PAMS. **A4**: We will compare the efficiency with other methods in the paper as you suggested. Since our techniques do not introduce any additional computation on inference, QuantSR has the same efficiency as other quantized SR models with significantly improved accuracy under the architecture with the same number of blocks. We present the comparison in Table A4. Table A4. Comparison of the number of parameters and operations. | Method | \#Block | \#Bit($w$/$a$) | \#Params (M) | Ops (G) | |-------------|-----------|----------------|--------------|-----------| | SRResNet | 16 | 32/32 | 1.37 | 90.1 | | DoReFa | 16 | 4/4 | 0.30 | 20.2 | | DSQ | 16 | 4/4 | 0.30 | 20.2 | | Bi-Real | 16 | 4/4 | 0.30 | 20.2 | | PAMS | 16 | 4/4 | 0.30 | 20.2 | | CADyQ | 16 | 4/4 | 0.30 | 20.2 | | QuantSR-C | 16 | 4/4 | 0.30 | 20.2 | --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the rebuttal. Some of my concerns still remain after carefully reading the rebuttal and going over the paper and related works again. 1. Efficiency: I'm interested in seeing inference latency comparisons as noted in the original review. I'm curious as to how the latency compares against baselines to actually observe if there is overhead or not. 2. Related work performance: - One of recent works (with SOTA performance) is not included: - Dynamic Dual Trainable Bounds for Ultra-low Precision Super-Resolution Networks, ECCV 2022. - Can authors provide discussions and performance comparisons against the work? - There are huge discrepancies between the reported performance of baselines for SRResNet provided in the paper and DDTB (related work mentioned above). Why is that? - How did authors reproduce results of CADyQ with fixed bit width for weights and activations, when CADyQ dynamically sets bits for each image? --- Reply to Comment 1.1.1: Comment: We deeply appreciate the reviewer's feedback and answer the questions as follows: > **Q1**: Efficiency: I'm interested in seeing inference latency comparisons as noted in the original review. I'm curious as to how the latency compares against baselines to actually observe if there is overhead or not. **A1**: We clarify that the latency of the quantized SR models on real deployment depends on the specific deployment library and hardware [1] (e.g., the INT8 quantized model has about 3x speedup compared to the FP32 counterpart on batch 1 setting on Tesla T4 GPU with TensorRT). It is hard to implement and deploy all quantized SR models on real hardware in such a limited time, while we plan to evaluate their latency when deployed in future work following the reviewer's suggestion. Moreover, we highlight that under the same architecture, QuantSR does not bring additional computational burden. As Table A4 shows in our last response, a comprehensive comparison of computational FLOPs reveals that our proposed QuantSR did not bring additional FLOPs while bringing improved accuracy. Therefore, on real deployment, the latency gap with other quantitative SR models may be minor. [1] MQBench: Towards Reproducible and Deployable Model Quantization Benchmark. Gong, et al. NeurIPS, 2021. > **Q2a**: Related work performance: One of recent works (with SOTA performance) is not included: DDTB. Can authors provide discussions and performance comparisons against the work? **A2a**: We compare DDBT in Table A2a and Table 2 of the attached PDF, and also discuss and show that the RLQ in our QuantSR can well solve the high-dynamic activation problem proposed by [2] (as in A7 for Reviewer G4U2). We will include relevant comparisons and discussions in our next version. As we demonstrate in the following Table A2a and Table 2 of the attached PDF, QuantSR consistently outperforms the DDTB method on various datasets. Moreover, in our QuantSR, the learnable parameters in RLQ effectively tackle the quantization problem of high dynamic activations in SR networks. Since the interval and zero-point of RLQ are learnable, it can gradually adapt to diverse and asymmetric input distributions. This implies that it can handle high dynamic activations better than statistics-based quantizers. As shown in Fig. 6(b), with increasing training, the offset of the activation quantizer in the network becomes more diverse, confirming the activation in the SR network is highly dynamic and demonstrating the ability of RLQ to adapt and cope well with such cases during training. [2] Dynamic Dual Trainable Bounds for Ultra-low Precision Super-Resolution Networks. Zhong, et al. ECCV, 2022. Table A2a. Quantitative results (4-bit). SRResNet is used as full-precision backbones. | | | \#Bit | Set5 | | Set14 | | B100 | | Urban100 | | Manga109 | | |--|--|--|--|--|--|--|--|--|--|--|--|--| | Method | Scale | ($w$/$a$) | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | | DDTB | $\times$2 | 4/4 | 37.78 | 0.9600 | 33.32 | 0.9160 | 32.03 | 0.8980 | 31.40 | 0.9210 | -- | -- | | QuantSR-C | $\times$2 | 4/4 | 37.80 | 0.9597 | 33.35 | 0.9158 | 32.04 | 0.8979 | 31.46 | 0.9221 | 38.25 | 0.9762 | | DDTB | $\times$4 | 4/4 | 31.97 | 0.8920 | 28.46 | 0.7780 | 27.48 | 0.7330 | 25.77 | 0.7760 | -- | -- | | QuantSR-C | $\times$4 | 4/4 | 32.00 | 0.8924 | 28.50 | 0.7799 | 27.52 | 0.7342 | 25.88 | 0.7807 | 30.15 | 0.9040 | > **Q2b**: The performance of baselines for SRResNet provided in the paper and DDTB … is very inconsistent. Why is that? **A2b**: We clarify that we follow the results of full-precision 32-bit SRResNet reported in [3] because these recent results fully include the results on the Set5, Set14, B100, Urban100, and Manga109 datasets for fair comparison (while PAMS [4] and DDTB [2] missing Manga109). As we mentioned in Sec 4.1 of our paper, we follow the results and settings reported by existing work to the greatest extent, and the training settings for each quantized SR method are exactly the same to achieve a fair comparison. [3] Basic Binary Convolution Unit for Binarized Image Restoration Network. Xia, et al. ICLR, [4] PAMS: Quantized Super-Resolution via Parameterized Max Scale. Li, et al. ECCV, 2020. > **Q2c**: How did authors reproduce results of CADyQ with fixed bit width … ? **A2c**: As mentioned in our paper Sec 4.1, our implementation of CADyQ completely follows the official GitHub repository (Cheeun/CADyQ). To achieve the comparison on a fixed bit-width quantization, our key yet simple change is to make the three candidates in the search space of CADyQ the same (such as in the "train_edsrbaseline_cadyq.sh" file, under 8-bit settings --search_space is 8+8+8). Then we were able to achieve a fair comparison between different methods at the same bit width.
Summary: This paper propose a new quantization method for single image super resolution, including Redistribution-driven Learnable Quantizer (RLQ) and Depth-dynamic Quantized Architecture (DQA). The previous one diversifies the representation and gradient information of quantized values by redistribution in quantizers, the last one improve the performance of quantized SR and achieves resource adaptation in inference. Extensive experiments demonstate the superiority of QuantSR. Strengths: The paper is well-written and easy to follow. The analysis of quantization for performance degradation is quite motivating and the proposed method is well-designed based on the analysis. The experiments show that QuantSR is better than previous methods. Weaknesses: 1. The two main reasons for this performance degradation is much common, and not specific to super resolution task. Thus the method proposed in this paper is not SR-optimal. 2. Depth-dynamic Quantized Architecture seems not much related with quantization, which is unneccessary. 3. The designing of mapping method in Equation~5 is unclear, and is much similar with previous quantization method for image classifacation. 4. Learnable quantizer and soft gradient is not new for quantization. 5. Loss the latest baseline, DDTB[3]. [1] Zhou S, Wu Y, Ni Z, et al. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients[J]. arXiv preprint arXiv:1606.06160, 2016. [2] Yamamoto K. Learnable companding quantization for accurate low-bit neural networks[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021: 5029-5038. [3] Zhong Y, Lin M, Li X, et al. Dynamic dual trainable bounds for ultra-low precision super-resolution networks[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022: 1-18. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. what is the theory of mapping method in Equation~5, or which function that inspires you? 2. The true reason that cause the degradation is the high-dynamic activation [1][2], Can QuantSR solve this problem as well? 3. Depth-dynamic Quantized Architecture is similar with dynamic network, could show more comparison with previous methods. [1] Tu Z, Hu J, Chen H, et al. Toward Accurate Post-Training Quantization for Image Super Resolution[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 5856-5865. [2] Zhong Y, Lin M, Li X, et al. Dynamic dual trainable bounds for ultra-low precision super-resolution networks[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022: 1-18. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Th authors addressed the limitations in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1**: The two main reasons for this performance degradation is much common, ... Thus the method proposed in this paper is not SR-optimal. **A1**: We clarify that in this work, the techniques in QuantSR focus on the SR task since it possesses certain attributes not found in other high-level tasks, such as the retention of pixel-level representations, making the quantizing SR models challenging. However, despite our techniques being designed with SR in mind, they also have the potential to be applicable to other tasks. We intend to investigate this in future work. > **Q2**: DQA seems not much related with quantization, which is unneccessary. **A2**: We clarify that the DQA in QuantSR focuses on improving the accuracy of the quantized SR network by breaking the accuracy limit of their full-precision counterparts. As we discussed in Sec 3.1, in previous methods, the full-precision counterpart is commonly assumed to represent the performance upper limit of the quantized model. Our DQA breaks through this limitation through dynamic architecture and training strategies. By applying selective quantized blocks from the 2x-size architecture, the performance upper limit of QuantSR far surpasses that of directly training the same architecture. The weight sharing among variants also enables a versatile trade-off between accuracy and efficiency during inference. > **Q3**: The designing of mapping method in Equation~5 is unclear, and is much similar with previous quantization method for image classifacation. **A3**: As mentioned in Sec 3.2, the purpose of $\phi$ is to incorporate information reflecting the quantizer's actual behavior while ensuring optimization stability. In Eq. (5), our $\phi$ function is obtained by applying the $tanh(x)$ transformation and embedding it within each quantization interval. Specifically, in the interval $x \in [0, 1]$, the function is $\phi(x) = \frac{tanh(2x−1)}{tanh(1)}+0.5$, and the shape is repeated in other intervals. Compared to existing quantization methods that use soft functions, such as the tanh-based DSQ [1] and quadratic function [2], the $\phi$ function in RLQ has a stable impact in once backward while the former two are more intense. This property allows the quantized SR model to update stably without collapsing and ultimately enhances the accuracy by introducing forward vectorized information into the gradients. The results in *Table 2 in the attached PDF* demonstrate that QuantSR significantly outperforms other SR model quantized soft-function-based methods. [1] Gong R, et al. Differentiable soft quantization: Bridging full-precision and low-bit neural networks. ICCV, 2019. [2] Liu Z, et al. Bi-real net: Enhancing the performance of 1-bit cnns with improved representational capability and advanced training algorithm. ECCV, 2018. > **Q4**: Learnable quantizer and soft gradient is not new for quantization. **A4**: Please note that the techniques in QuantSR are specifically designed to address the performance degradation of the quantized SR model and are different from existing quantization methods, as described next. For the learnable quantizer: Since the SR task focuses on reconstructing images at the pixel level, making it crucial to preserve as much pixel-level information as possible during quantization. Previous quantized SR models using statistical quantizers lead to inaccurate information and significantly degraded precision. The forward mapping of our RLQ allows for flexible adjustments and diversification across the network, achieved by utilizing learnable intervals and zeros in quantizers. For soft gradient: As we respond in A3, the purpose of $\phi$ is to incorporate information reflecting the actual behavior of quantizers while ensuring optimization stability, and it makes QuantSR significantly outperform existing quantization methods using soft gradient approximation. > **Q5**: Loss the latest baseline, DDTB[3]. **A5**: We will compare DDTB in the paper as the reviewer suggests. As we demonstrate in *Table 2 of the attached PDF*, QuantSR consistently outperforms the DDTB method on various datasets. > **Q6**: what is the theory of mapping method in Equation~5, or which function that inspires you? **A6**: Regarding the mapping method of Eq. (5) in RLQ, it offers valuable gradient-guided information. Specifically, within each quantization interval, the gradient becomes smaller as the distance from the center of the region increases (L181 in our paper). This characteristic allows RLQ to incorporate information that accurately reflects the behavior of quantizers while maintaining optimization stability. Furthermore, in our response A3 above, we conduct a comparative analysis between RLQ and other existing soft quantization methods. > **Q7**: The true reason that cause the degradation is the high-dynamic activation [1][2], Can QuantSR solve this problem as well? **A7**: In our QuantSR, the learnable parameters in RLQ effectively tackle the quantization problem of high dynamic activations in SR networks. Relevant discussions on this approach will be included in the paper. Since the interval and zero-point of RLQ are learnable, it can gradually adapt to diverse and asymmetric input distributions. This implies that it can handle high dynamic activations better than statistics-based quantizers. As shown in Fig. 6(b), with increasing training, the offset of the activation quantizer in the network becomes more diverse, confirming the activation in the SR network is highly dynamic and demonstrating the ability of RLQ to adapt and cope well with such cases during training. > **Q8**: DQA is similar with dynamic network, could show more comparison with previous methods. **A8**: Our DQA is devoted to utilizing a dynamic architecture to break through the accuracy limitation of quantized models and also achieves dynamic trade-offs during inference. We will incorporate more related work about dynamic networks in the final version. --- Rebuttal Comment 1.1: Title: Response to the rebuttal. Comment: Thank you for the clear and convincing rebuttal. All my concerns are well addressed. This work is good enough to be published, I have raised my score.
Rebuttal 1: Rebuttal: We appreciate all reviewers for the constructive reviews and positive feedback to our QuantSR. Your expertise and insightful comments help us to further improve our paper. The attached PDF includes: * Figure 1: Visual comparison of weights in QuantSR with and without learnable quantizer parameters. * Table 1: Quantitative results on transformer-based SwinIR-S and CAT-R architectures (4-bit, 4$\times$ scale) * Table 2: Quantitative results on CNN-based SRResNet architectures (4-bit, 4$\times$ scale) For detailed instructions, please see our responses to each reviewer. Pdf: /pdf/1c653f7412a4642717e4cf18fcba5170d123cf03.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper propose an low-bit quantization method for efficient image super-resolution by introduce a Redistribution-driven Learnable Quantizer (RLQ). Besides, the proposed Depth-dynamic Quantized Architecture (DQA) allows for the trade-off between efficiency and accuracy during inference through weight sharing. Experiments show that the proposed method outperforms existing state-of-the-art quantized SR networks in terms of accuracy while also providing more competitive computational efficiency. Strengths: - **Originality:** This paper propose a novel accurate quantization scheme for efficient image SR by introducing Redistribution-driven Learnable Quantizer (RLQ) and Depth-dynamic Quantized Architecture (DQA). The proposed scheme can be applied into both CNN- and Transformer- based SR networks. - **Quality:** This paper is well organized and easy to follow. Weaknesses: - Motivation is unclear: Although the proposed method outperforms existing state-of-the-art quantized SR networks in terms of trade-off between accuracy and complexity, the contribution is still limited. If we only look at the method section, it seems that the proposed module can be applied to Low-bit Quantization of model compression for various tasks. Are there any special designs for SR tasks? If so, it is suggested to provide more discussion to highlight the contributions. - As claimed in Introduction section, "existing SR models rely on expensive computational resources, which significantly limits the real-world SR applications on resource-constrained edge devices. Therefore, there is an urgent requirement to develop model compression techniques for SR models to reduce the computational overhead". As far as I know, the low complexity SR model is indeed an important research task, but the model compression is only one of them. In fact, many lightweight SR algorithms have been proposed recently, such as look-up table based method, *e.g.,* SR-LUT (CVPR2021), MuLUT(ECCV2022). It is suggested to discuss the superior of the proposed method. - The proposed method is evaluated only on two classical SR networks, SRResNet (CVPR'17) for CNN-based and SwinIR_S (ICCVW'21) for Transform-based backbones, respectively. Can the proposed method be extended to the recently published SR models? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to comments in the **weakness** part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1**: … Are there any special designs for SR tasks? If so, it is suggested to provide more discussion to highlight the contributions. **A1**: We clarify that in this work, the techniques in QuantSR focus on the SR task since it possesses certain attributes not found in other high-level tasks, such as the retention of pixel-level representations, making the quantizing SR models challenging. However, despite our techniques being designed with SR in mind, they also have the potential to be applicable to other tasks. We intend to investigate this in future work. Specifically, the Redistribution-driven Learnable Quantizer (RLQ) targets the preservation of fine-grained pixel information essential for SR tasks. The information recovery of RLQ for quantized SR networks is accomplished through forward and backward propagation by utilizing learnable quantizer parameters and the embedded soft function, respectively. And the Depth-dynamic Quantized Architecture (DQA) leverages the blockwise characteristics of the quantized SR model, effectively pushing the boundaries of quantization performance. Benefiting from screening from a stronger initial SR model with more stacked quantized blocks, the upper limit of the model accuracy is shattered. Both RLQ and DQA operate independently but synergistically, collectively enhancing the performance of quantized SR models. In addition, we keep the general nature of the proposed techniques, which are applicable to a variety of SR architectures, including CNNs and transformers. > **Q2**: … many lightweight SR algorithms have been proposed recently, such as look-up table based method, e.g., SR-LUT (CVPR2021), MuLUT(ECCV2022). It is suggested to discuss the superior of the proposed method. **A2**: We will follow the reviewer's suggestions to discuss the mentioned works. Currently, numerous researchers are devoted to achieving lightweight quantization algorithms, such as SR-LUT and MuLUT, which innovatively utilize lookup table-based methods to achieve fast inference and good reasoning speed. However, they still exhibit a significant gap in accuracy compared to advanced SR models based on deep networks. For instance,the PSNR indicator for the 4x Set5 dataset: MuLUT-SDY-X2 30.60 vs. 4-bit QuantSR-T 32.18. Our proposed QuantSR approach aims to bridge the gap between quantized SR models and their full-precision versions, bringing its performance close to that of their full-precision counterparts and surpassing existing methods, including both quantized and lookup-table-based SR models. Additionally, since the quantizer in QuantSR works at a general operator level, it has the potential to further accelerate convolutional units in lookup-table-based SR models, maintaining performance while pushing their efficiency limits further. > **Q3**: … Can the proposed method be extended to the recently published SR models? **A3**: Yes, thanks to the general nature of our QuantSR, it can be flexibly applied to various types of SR models. In this study, we utilize our QuantSR to quantize the CAT model under 4-bit and present the relevant results in Table A3 (part of *Table 1 in the attached PDF*). The findings demonstrate that QuantSR significantly enhances the model’s performance compared to PAMS, the currently existing state-of-the-art quantized SR model. This outcome illustrates the robust improvements our proposed method brings to the quantization of various SR architectures. Table A3: Quantitative results. CAT-R [1] is used as full-precision backbones. | | | \#Bit | Set5 | | Set14 | | B100 | | Urban100 | | Manga109 | | |-----------------------------|-------------------------|-----------|-------|--------|-------|--------|-------|--------|-------|--------|-------|-------------| | Method | Scale | ($w$/$a$) | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | | Bicubic | $ \times$4 | -/- | 28.42 | 0.8104 | 26.00 | 0.7027 | 25.96 | 0.6675 | 23.14 | 0.6577 | 24.89 | 0.7866 | | CAT\_R | $\times$4 | 32/32 | 32.89 | 0.9044 | 29.13 | 0.7955 | 27.95 | 0.7500 | 27.62 | 0.8292 | 32.16 | 0.9269 | | PAMS | $\times$4 | 4/4 | 31.77 | 0.8885 | 28.33 | 0.7752 | 27.40 | 0.7295 | 25.55 | 0.7672 | 29.62 | 0.8966 | | QuantSR-T | $\times$4 | 4/4 | 32.31 | 0.8955 | 28.74 | 0.7836 | 27.61 | 0.7389 | 26.31 | 0.7896 | 30.69 | 0.9118 | [1] Chen et al. Cross Aggregation Transformer for Image Restoration. NeurIPS, 2022. --- Rebuttal Comment 1.1: Comment: Thank you for the clear and convincing rebuttal. All my concerns are well addressed. I will raised my score.
null
null
null
null
null
null
Mitigating Over-smoothing in Transformers via Regularized Nonlocal Functionals
Accept (poster)
Summary: Recent research works have revealed that the over-smoothing issue, a prevalent challenge in Graph Neural Networks, similarly plagues Transformers. Contrary to expectations, the performance of a Transformer model does not invariably improve with increased depth of the self-attention layers. In fact, a deeply layered Transformer may not necessarily outperform a shallow one. This paper elucidates that the over-smoothing phenomenon occurs due to a consistent decrease in the non-local function of weighted differences in each self-attention layer as depth increases. Drawing on this finding, the paper puts forward a solution featuring modified skip connections. While similar solutions have been proposed in prior research, this paper sets itself apart by providing an in-depth theoretical analysis from a gradient flow perspective, which previous works have not addressed. Furthermore, the experimental results corroborate the effectiveness of the proposed solution in mitigating over-smoothing in Transformers. Strengths: 1. The theoretical investigation conducted in this paper is rigor and in-depth. Weaknesses: 1. The theoretical explication put forth in this paper applies accurately only to Transformers with single-head self-attention mechanisms. However, given that the prevailing Transformers utilize multi-head self-attention mechanisms and feed-forward networks, it becomes prudent to critically evaluate the efficacy of the resolution proposed herein. 2. This paper lacks a comprehensive set of comparative experiments to analyze the influence of diverse initial values of the parameter $\widetilde{\lambda}$ on the model's final performance. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. This paper introduces a concept of symmetric self-attention, i.e., the query matrix is equal to the key matrix. The experimental results demonstrate an unexpectedly high performance from a Transformer predicated on symmetric self-attentions. This performance seems to contravene the conventional understanding that self-attention should capture the bi-directional correlation among tokens. Can the authors provide a logical elucidation to account for these particular experimental outcomes? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. Below we address your concerns. **Q1**. The theoretical explication put forth in this paper applies accurately only to Transformers with single-head self-attention mechanisms. However, given that the prevailing Transformers utilize multi-head self-attention mechanisms and feed-forward networks, it becomes prudent to critically evaluate the efficacy of the resolution proposed herein. **Reply:** Thanks for your comments. The variational denoising framework we develop for the single-head self-attention mechanism can be extended to derive the multi-head self-attention mechanism. Please allow us to explain this derivation below. Let us consider one of the common implementations of the multi-head self-attention, in which the input sequence $X \in R^{N \times D_{x}}$ is truncated into $H$ pieces $X_d \in R^{N \times D_{x}/H}$, for $d = 1, ..., H$. Here, we assume that $D_{x}$ is divisible by $H$. In order to derive the multi-head attention, we apply our variational denoising framework proposed in Section 2 and Theorem 1 in our paper on each of these $X_d$ signals. Note that the vector $\bf{q}$$(i)$, $\bf{k}$$(i)$, and $\bf{v}$$(i)$, for $i = 1,...,N$, in Theorem 1 are replaced by the vectors $\bf{q}$$_d(i)$, $\bf{k}$$_d(i)$, and $\bf{v}$$_d(i)$ which are linear transformations of the input vector $X_d(i)$, $i = 1,...,N$. The feed-forward network then combines the output sequences of all of these $H$ heads. Given our derivation above, it follows that the oversmoothing issue still persists in transformers with multi-head self-attention, and the proposed NeuTRENO still helps mitigate it. Indeed, our experiments in the paper are conducted with multi-head self-attention and validate the advantage of NeuTRENO over the baseline multi-head self-attention in resolving oversmoothing. **Q2**. This paper lacks a comprehensive set of comparative experiments to analyze the influence of diverse initial values of the parameter on the model's final performance. **Reply:** Thank you for your suggestion. We have conducted an ablation study on the impact of the hyperparameter $\tilde\lambda$. In particular, on the ADE20K image segmentation task, we train NeuTRENO with different $\tilde\lambda$ values. We summarize our results in Table 13 in the attached PDF. Our findings reveals that within the range of [0.2, 1], NeuTRENO consistently outperforms the softmax baseline. However, when $\tilde\lambda$ values become small or big (below 0.2 or above 1, respectively), NeuTRENO's performance declines. **Q3**. This paper introduces a concept of symmetric self-attention, i.e., the query matrix is equal to the key matrix. The experimental results demonstrate an unexpectedly high performance from a Transformer predicated on symmetric self-attentions. This performance seems to contravene the conventional understanding that self-attention should capture the bi-directional correlation among tokens. Can the authors provide a logical elucidation to account for these particular experimental outcomes? **Reply:** Thank you for your question. First, we would like to clarify that transformer models can be bi-directional or uni-directional, depending on the task or model design. For instance, BERT [1], a bi-directional model, allows tokens to attend to both preceding and succeeding tokens. On the other hand, models like GPT [2], trained for autoregressive language modeling, is a uni-directional model as tokens can only attend to their previous counterparts. Second, regarding the symmetric attention, its competitive empirical performances have been demonstrated in existing works [3, 4]. **References** [1] Jacob Devlin, et al.. “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding”. NAACL, 2019. [2] Tom Brown, et al. "Language models are few-shot learners." NEURIPS, 2020. [3] Yao-Hung Hubert Tsai,et al. “Transformer Dissection: An Unified Understanding for Transformer’s Attention via the Lens of Kernel”. EMNLP-IJCNLP, 2019. [4] Wenlong Chen, et al. "Calibrating Transformers via Sparse Gaussian Processes." ICLR, 2022. ----- We hope we have cleared your concerns about our work. We would appreciate it if we could get your further feedback at your earliest convenience. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. I have no further questions about this paper. --- Rebuttal 2: Title: Thanks for your endorsement! Comment: Thanks for increasing your score, and we appreciate your endorsement.
Summary: This work shows that self-attention layers in transformers minimize a funcitonal which promotes smoothneess, thereby casuing token uniformity. The work also proposes a novel regularizer to preserve fidelity of the tokens. The work empirically shows that NeuTRENO outperforms baseline transformers in reducing the over-smoothing of token representation. Strengths: - I like the flow of the paper. The authors dive straight into the key issue of the work, without much retrospect on the previous known matters. - The paper rewrites the self-attention as a Gradient Descent Step to minimizie a nonlocal functional. - The paper shows that as k (number of layers) increases, the model is more likely to suffer from over-smoothing. - I think viewing each self-attention layer in the transformer as an impliciit gradient ascent in very interesting. It echos with some literature in the past, providing more insights on the interpretability of deep neural networks. Weaknesses: - Some use of the wording could be improved, for example in line 220s, the authors like to use words like "significantly", "addresses" etc. In my opinion, some of the experimental results are not strong enough to support author's claim. It could create misleading for the audiences. - Experiments looks very interesting. I think the paper might benefit more by discussing broader impact of "alleviating over-smoothness". Say for example. would be help in OOD setting? incremental setting? or other interesting applications. In my opinion, 1% increase on standard metric should not be the best selling point for this interesting work. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Although the authors motivates this work from the point of over-smoothing, the paper lacks discussion on the actual cause, impact and consequence of over-smoothing. For example, from Figure.1, we observe that NeuTRENO has a smaller cosine similarity comparing to naive DeiT. One question would be, what should be the desired cosine sim? Is smaller cosine sim siginificantly better than say sim over 0.8? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Please see the previous two sections Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. Below we address your concerns. **Q1**. Some use of the wording could be improved, for example in line 220s, the authors like to use words like "significantly", "addresses" etc. In my opinion, some of the experimental results are not strong enough to support author's claim. It could create misleading for the audiences. **Reply:** Thank you for your feedback. We have removed those words in our revised manuscript. We have also included the standard deviations from 5 runs for each experiment (in the main text) in Tables 9, 10, and 11 in the attached PDF. The standard deviations for both the NeuTRENO and baseline models in these tasks are small compared to the gains achieved by the NeuTRENO method. This observation suggests that NeuTRENO’s improvements over the baseline are significant and not by chance. **Q2**. Experiments looks very interesting. I think the paper might benefit more by discussing broader impact of "alleviating over-smoothness". Say for example. would be help in OOD setting? incremental setting? or other interesting applications. In my opinion, 1% increase on standard metric should not be the best selling point for this interesting work. **Reply:** Thank you for your valuable suggestions. We have evaluated the robustness of our NeuTRENO model compared to the baseline transformer model, particularly under adversarial examples and for out-of-distribution generalization. Table 12 in the attached PDF demonstrates that NeuTRENO DeiT-Tiny is consistently more robust than the DeiT-Tiny baseline on the Imagenet-C ( corruption and perturbations data, such as adding noise and blurring the images) [1], Imagenet-A (adversarial examples) [2], and Imagenet-R (for out of distribution generalization) [3], which are widely used to test the model’s robustness. Furthermore, in an incremental learning setting [4], our 8-layer NeuTRENO achieves 1.97% higher accuracy on the sentiment classification task [5] than the 8-layer baseline transformer. We hope that these additional analyses and results address your concerns and provide further evidence for the stability, significance, and robustness of our proposed NeuTRENO approach. **References** [1] Hendrycks, Dan, et al. "Benchmarking neural network robustness to common corruptions and perturbations." arXiv, 2019. [2] Hendrycks, Dan, et al. "Natural adversarial examples." CVPR, 2021. [3] Hendrycks, Dan, et al. "The many faces of robustness: A critical analysis of out-of-distribution generalization." ICCV, 2021. [4] Kahardipraja, et al. "Towards incremental transformers: An empirical analysis of transformer models for incremental NLU." EMNLP, 2021. [5] Dimitrios Kotzias, et al. "From group to individual labels using deep features". KDD, 2015 **Q3**. Although the authors motivates this work from the point of over-smoothing, the paper lacks discussion on the actual cause, impact and consequence of over-smoothing. For example, from Figure.1, we observe that NeuTRENO has a smaller cosine similarity comparing to naive DeiT. One question would be, what should be the desired cosine sim? Is smaller cosine sim siginificantly better than say sim over 0.8? **Reply:** Thanks for your comments. We believe there is a misunderstanding of the contributions of our paper. Please allow us to clear this misunderstanding by clarifying that our work first focuses on developing a variational denoising framework to understand the self-attention of transformers as a gradient descent approximation of a functional. Using this new finding, we provide an explanation for the oversmoothing issue in transformers as a result of self-attention minimizing a functional, leading to the smoothing effect on the input sequence, analogous to a diffusion process (see Remark 1 in our main text). Then, we rigorously prove this observation on the existence of oversmoothing in transformers using a random walk analysis in Section 2.2. Thus, in our paper, we not only discuss the actual cause of oversmoothing but also prove its existence theoretically. Regarding the impact and consequence of oversmoothing, oversmoothing leads to a loss of diversity among token representations, hindering the model's capacity to capture diverse features. As a result, the model's performance can be adversely affected. Additionally, oversmoothing often constrains the ability of transformer models to be scaled up in depth. For instance, more-layer transformer models might underperform compared to those with fewer layers due to this issue [1]. The cosine similarity between token representations of each layer is an indicator of the oversmoothing issue. It shows how similar a token is to the other, on average, at a layer. When cosine similarity at a layer is high, it indicates that the representations of tokens are too similar, diminishing the expressive power of the model. Consequently, smaller cosine similarity scores are preferable. **References** [1] Daquan Zhou, et al. “Deepvit: Towards deeper vision transformer”. arXiv, 2021. ----- We hope we have cleared your concerns about our work. We would appreciate it if we could get your further feedback at your earliest convenience. --- Rebuttal Comment 1.1: Comment: I thank the author for providing this rebuttal. I agree that it is a very interesting work that tried to understand the self-attention of transformers as a gradient descent step. I am still a bit confused by the over-smoothness sections. For example, in figure.7 of the rebuttal pdf, the proposed new architecture seem to successfully decrease the cosine sim between tokens. But does that lead to better empirical performance or better interpretability? On vision tasks, I am still dubious about the statement that over-smooth is indeed a issue and should be addressed. Of course, I think the work is very valuable in terms of trying to address this problem if this problem is indeed very significant. --- Reply to Comment 1.1.1: Title: Response to Reviewer qcqa: Oversmoothing and Empirical Performance Comment: Thanks for your further feedback. Please allow us to address your concerns below. We will include the following discussion and existing works in our revised manuscript to clarify the impact of oversmoothing on the model’s performance, especially for the vision transformer (ViT). **Question: For example, in figure.7 of the rebuttal pdf, the proposed new architecture seems to successfully decrease the cosine sim between tokens. But does that lead to better empirical performance or better interpretability?** **Reply:** In Figure 7 (Left) of the rebuttal PDF, our NeuTRENO BERT finetuned on the SQUAD question answering task [1] yields better accuracy than the baseline BERT finetuned on the same task (81.39 exact match score and 88.62 F1 score vs. 80.77 exact match score and 88.12 F1 score). This result indicates that reducing cosine similarity between tokens in the trained transformer-based models leads to better empirical performance. **Question: On vision tasks, I am still dubious about the statement that over-smooth is indeed an issue and should be addressed.** **Reply:** The oversmoothing in ViT has been verified and investigated in existing works. In particular, [2] observes that the performance of ViT quickly saturates as more layers are added to the model. Moreover, experiments in [2] show that the 32-layer ViT underperforms the 24-layer ViT, indicating the difficulty of ViTs in gaining benefits from deeper architectures. The authors point out that oversmoothing results in this phenomenon by causing the token representations to become identical when the model grows deeper. Based on this observation, they propose a cross-head communication method that helps enhance the diversity of both token representations and attention matrices. Furthermore, it has been shown in [3] that the training of ViT models encounters instability with greater depths. [4] proposes that this instability arises from the oversmoothing, where token representation for patches within an image become progressively alike as the model's depth increases. In an effort to explain this issue, [5] finds out that self-attention acts as a low-pass filter, which smoothens the token representations in ViTs. This leads to the proposal of the FeatScale method [5], which regulates feature frequencies, whether low or high, to counteract the consequences of oversmoothing. Different from existing works, in our paper, we theoretically prove the oversmoothing phenomenon via the variational denoising framework that we develop. Our proposed NeuTRENO method helps mitigate the oversmoothing issue and improves the performance of the ViT baselines (DeiTs), as well as other baseline transformer-based models. We also empirically demonstrate that NeuTRENO not only reduces the cosine similarity between token representation (See Figure 1 in the main text and Figure 7) but also the redundancy in attention maps between layers (See Figure 5 in Appendix G2). Finally, we show that NeuTRENO is complementary to existing methods, including FeatScale [5] (See Table 1 and Table 7 in our paper). **References** [1] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. “SQuAD: 100,000+ questions for machine comprehension of text”. EMNLP, 2016. [2] Daquan Zhou, et al. “Deepvit: Towards deeper vision transformer”. arXiv, 2021. [3] Touvron Hugo, et al. "Going deeper with image transformers." ICCV, 2021. [4] Chengyue Gong, et al. “Vision transformers with patch diversification”. arXiv, 2021. [5] Wang Peihao, et al. "Anti-oversmoothing in deep vision transformers via the fourier domain analysis: From theory to practice." ICLR, 2022.
Summary: This paper analyses the over-smoothing problem of transformer architecture by showing that self-attention layers minimize a functional that causes over-smooth. To address this problem, the authors introduce a regularizer that penalizes the norm of the difference between the smooth output tokens and input tokens. In the experimental section, the authors demonstrate that their proposed solution, NeuTRENO, outperforms baseline models on visual and language-related downstream tasks. Strengths: The issue of over-smoothing in transformers is a fascinating topic. While previous research has explored this problem, this paper offers a unique perspective by examining self-attention as a gradient descent step from a variational standpoint. This novel approach sheds new light on the over-smoothing problem and contributes to a deeper understanding of the issue within the research community. The effectiveness of the proposed solution, NeoTRENO, in mitigating the problem of over-smoothing is demonstrated both theoretically and empirically. The experiments show NeoTRENO's effectiveness in both vision and language transformers. Weaknesses: Current experiments are mostly conducted on small transformer models( e.g., DeiT-tiny). Given that the transformer architecture is crucial for large pre-trained language models, it remains unclear whether the proposed solution can effectively alleviate the over-smoothing problem when the model size is increased, and whether can be combined with large pre-trained language models (e.g., BERT). Technical Quality: 3 good Clarity: 3 good Questions for Authors: please refer to weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The generalizability and scalability of the proposed solution to larger transformer models have not been investigated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. Below we address your concerns. **Q1.** Current experiments are mostly conducted on small transformer models( e.g., DeiT-tiny). Given that the transformer architecture is crucial for large pre-trained language models, it remains unclear whether the proposed solution can effectively alleviate the over-smoothing problem when the model size is increased, and whether can be combined with large pre-trained language models (e.g., BERT). **Reply:** Thanks for your comments. The DeiT-tiny model used in our experiments has 5M parameters and is trained on the large-scale Imagenet dataset for classification task. To examine the scalability of our NeuTRENO model, we have applied NeuTRENO to the DeiT-small, which is 4 times larger than the DeiT-tiny and has 22M parameters. We summarize the results in Table 7 in Appendix F.1. The improvements achieved by our NeuTRENO DeiT-small over the baseline model demonstrate the scalability of NeuTRENO when applied to very large transformer architectures. Furthermore, we have conducted additional experiments to show that our NeuTRENO method can effectively mitigate the oversmoothing issue in the BERT-base model. In particular, in Figure 7 (Left) in the attached PDF, we plot the cosine similarity between token representations across layers of a pre-trained BERT-base model [1] on the SQuAD v1.1 question answering task [2] and observe the presence of the oversmoothing issue as the model gets deeper, causing tokens to become identical. We then apply NeuTRENO on the same pre-trained BERT model, and without any fine-tuning, we observe a significant reduction in the cosine similarity between token embeddings in each layer (see Figure 7 (Left) in the attached PDF), indicating that NeuTRENO effectively mitigates the oversmoothing problem in BERT. Moreover, we have conducted the same analysis for a randomized BERT-base model and a randomized NeuTRENO BERT-base model and obtained the same encouraging results (see Figure 7 (Middle) in the attached PDF). These results further suggest that NeuTRENO helps alleviate the oversmoothing issue in large-scale transformer models. **References** [1] Jacob Devlin, et al. “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding”. NAACL, 2019. [2] Pranav Rajpurkar, et al. “SQuAD: 100,000+ Questions for Machine Comprehension of Text”. EMNLP, 2016. ----- We hope we have cleared your concerns about our work. We would appreciate it if we could get your further feedback at your earliest convenience. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response and clarifiaction. I keep my initial rating. --- Reply to Comment 1.1.1: Title: Thanks for your endorsement! Comment: Thanks for your response and we appreciate your endorsement.
Summary: This paper studies the oversmoothing problem in transformers. Roughly speaking, it was observed that embeddings start to converge when the network is deeper. The authors built a model to explain this phenomenon. Roughly speaking, they relate having deeper architecture with making progress towards minimizing a function. Then they built some regularizers out of this intuition. Strengths: I think these analysis and regularization tools are useful addition to the exploding area of transformers. Weaknesses: I feel I was not able to properly parse some math texdt. My most unsure part is whether the amount of hand-waiving is too much even under today’s standard. For example, the authors feel very comfortable in exchanging integration operator and summation. It is true in general the approximation errors are usually inconsequential. But it is still different from pretending that they are the same in theorem statements/proofs. Also, I dont understand how K(x, y), k(x, y) and boldk(x) and boldk(y) are related. First k(x, y) are defined, then there will always exist boldk? It also looks that sometimes k/K are treated as kernels whereas sometimes it is treated as a kernel function. Probability functions and kernel functions have different properties? Technical Quality: 3 good Clarity: 3 good Questions for Authors: It was proven that an update of a self-attention is equivalent to taking a gradient over a function. But will the function have many local optimal so that even we have the main theorem, it does not quite say all the embeddings will eventually be the same? Or maybe the function is a convex function. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. Below we address your concerns. **Q1**. I feel I was not able to properly parse some math texdt. My most unsure part is whether the amount of hand-waiving is too much even under today’s standard. For example, the authors feel very comfortable in exchanging integration operator and summation. It is true in general the approximation errors are usually inconsequential. But it is still different from pretending that they are the same in theorem statements/proofs. **Reply:** In the main text, we employ the Monte Carlo approximation method to estimate integrals in Eqns. 11, 15, 23, and 27. We choose the Monte Carlo integration method because it is an unbiased estimator. Additionally, when the sample size $N$ used in Monte Carlo is sufficiently large, the estimation converges to the correct value. This tends to hold true for transformer models, which are often applied on long sequences with many token samples. **Q2**. Also, I don't understand how K(x, y), k(x, y) and boldk(x) and boldk(y) are related. First k(x, y) are defined, then there will always exist boldk? It also looks that sometimes k/K are treated as kernels whereas sometimes it is treated as a kernel function. Probability functions and kernel functions have different properties? **Reply:** The function $k(x, y)$ captures the similarity between signal values at positions x and y. For example, $k(x, y)$ can be a radial basis function (RBF) kernel $e^{-\gamma||\Phi(x) - \Phi(y)||^2}$ for a feature function $\Phi$ [1]. $K(x, y)$ is indeed just a shorthand notation for $k(x, y) + k(y, x)$. We use this notation to simplify the expression. $\bf{k}$$(\cdot)$ is a vector-valued function which maps positions ($x$ or $y$) to feature vectors. In transformer models, ${\bf k}(i)$, $i=1,\dots,N$, is the key vector of token $i$-th in self-attention. For images, $\bf{k}$$(x)$ indicates the feature vector at pixel $x$. **References** [1] Bishop, Christopher M. "Pattern Recognition and Machine Learning". New York. Springer, 2006. **Q3. It was proven that an update of a self-attention is equivalent to taking a gradient over a function. But will the function have many local optimal so that even we have the main theorem, it does not quite say all the embeddings will eventually be the same? Or maybe the function is a convex function.** **Reply:** Yes, the functional $E$ is a convex function of $\bf{u}$ since $J(\bf{u})$ is a sum of quadratic functionals of $u_j$ for $j = 1,\dots, D$, and $G(\bf{u}, \bf{f})$ is a quadratic functional of $\bf{u}$. Hence, $E(\bf{u})$, the sum of two convex functional, is itself a convex functional. ----- We hope we have cleared your concerns about our work. We would appreciate it if we could get your further feedback at your earliest convenience. --- Rebuttal Comment 1.1: Title: Thank You Comment: Dear Reviewer YNWk, Thanks for your reviews of our paper. Since the discussion period between the authors and the reviewers was already over and we have not heard from you during this period, we would be grateful if the reviewer could let us know if all your questions are addressed to some extent. If you are satisfied with our answers, we hope that the reviewer will consider adjusting your score. Best regards, Authors
Rebuttal 1: Rebuttal: Dear AC and reviewers, Thanks for your thoughtful reviews and valuable comments, which have helped us improve the paper significantly. We are encouraged by the endorsements that: 1) Our paper's variational denoising framework for self-attention is novel and interesting (Reviewer qvvE, h3ak, qcqa); 2) The derivation of the oversmoothing in transformers from our variational denoising framework is useful (Reviewer YNWk) and contributes to the understanding of the (oversmoothing) issue (Reviewer h3ak); 3) The theory in our paper is rigorous, in-depth (Reviewer k2Mg), clear, and detailed (Reviewer qvvE); 4) The effectiveness of our NeuTRENO model is demonstrated both theoretically and empirically (Reviewer h3ak). Among the concerns of the reviewers is the significance of our NeuTRENO's advantages over the baseline transformer model. We address this concern here. 1. We provide the standard deviations from 5 runs for each experiment (in the main text) in Tables 9, 10, and 11 in the attached PDF. The standard deviations for both the NeuTRENO and baseline models in these tasks are small compared to the gains achieved by the NeuTRENO method. This observation suggests that NeuTRENO’s improvements over the baseline are significant and not by chance. 2. In addition to the standard metrics, we have evaluated the robustness of our NeuTRENO model compared to the baseline transformer model, particularly under adversarial examples and for out-of-distribution generalization. Table 12 in the attached PDF demonstrates that NeuTRENO DeiT-Tiny is consistently more robust than the DeiT-Tiny baseline on the Imagenet-C (common data corruption and perturbations, such as adding noise and blurring the images) [1], Imagenet-A (adversarial examples) [2], and Imagenet-R (out of distribution generalization) [3] datasets, which are widely used to test the model’s robustness. 3. Furthermore, in an incremental learning setting [4], our 8-layer NeuTRENO achieves 1.97% higher accuracy on the sentiment classification task [5] than the 8-layer baseline transformer. 4. To demonstrate the scalability of our proposed model, we have conducted additional experiments to show that our NeuTRENO method can effectively mitigate the oversmoothing issue in the BERT-base model. In particular, in Figure 7 (Left) in the attached PDF, we plot the cosine similarity between token representations across layers of a pre-trained BERT-base model [6] on the SQuAD v1.1 question answering task [7] and observe the presence of the oversmoothing issue as the model gets deeper, causing tokens to become identical. We then apply NeuTRENO on the same pre-trained BERT model, and without any fine-tuning, we observe a significant reduction in the cosine similarity between token embeddings in each layer (see Figure 7 (Left) in the attached PDF), indicating that NeuTRENO effectively mitigates the oversmoothing problem in BERT. Moreover, we have conducted the same analysis for a randomized BERT-base model and a randomized NeuTRENO BERT-base model and obtained the same encouraging results (see Figure 7 (Middle) in the attached PDF). These results further suggest that NeuTRENO helps alleviate the oversmoothing issue in large-scale transformer models. We would also like to summarize the main contributions of our paper here. Our work first focuses on **developing a variational denoising framework to understand the self-attention of transformers as a gradient descent approximation of a functional**. Using this new finding, we **provide an explanation for the oversmoothing issue in transformers** as a result of self-attention minimizing a functional, leading to the smoothing effect on the input sequence, analogous to a diffusion process (see Remark 1 in our main text). Then, we **rigorously prove this observation on the existence of oversmoothing in transformers using a random walk analysis** in Section 2.2. Thus, in our paper, we not only discuss the cause of oversmoothing but also prove its existence theoretically. Finally, we **propose the Neural Transformer with a Regularized Nonlocal Functional (NeuTRENO), a novel class of transformers designed to mitigate oversmoothing**. NeuTRENO is derived by optimizing a regularized nonlocal functional, which includes an additional convex fidelity term. This fidelity term penalizes the norm of the difference between the smooth output tokens from self-attention and the input tokens, thereby reducing the over-smoothing effect. It is important to note that in our analysis, we primarily aim at explaining and resolving the collapse of representations, i.e., oversmoothing, at the token level (tokens representations become identical) rather than at the sentence level (the models give high similarity scores for semantically different sentences). **References** [1] Hendrycks, Dan, et al. "Benchmarking neural network robustness to common corruptions and perturbations." arXiv, 2019. [2] Hendrycks, Dan, et al. "Natural adversarial examples." CVPR, 2021. [3] Hendrycks, Dan, et al. "The many faces of robustness: A critical analysis of out-of-distribution generalization." ICCV, 2021. [4] Kahardipraja, et al. "Towards incremental transformers: An empirical analysis of transformer models for incremental NLU." EMNLP, 2021. [5] Dimitrios Kotzias, et al. "From group to individual labels using deep features". KDD, 2015 [6] Jacob Devlin, et. al. “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding”. NAACL, 2019. [7] Pranav Rajpurkar, et al. “SQuAD: 100,000+ Questions for Machine Comprehension of Text”. EMNLP, 2016. ----- We are glad to answer any further questions you have on our submission. Pdf: /pdf/c70e19ae0c1633b62c9126c79e0d351b3f45ab0a.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper analyzes the reason of over-smoothing in Transformer based on Self-attention as a Gradient Descent Step to Minimize a Nonlocal Function and random walk analysis. The paper then proposes a novel regularizer that penalizes the norm of the difference between the output tokens from self-attention and the input tokens to preserve the fidelity of the tokens. Experimental results on ImageNet classification, image segmentation and LM tasks show that the proposed approach achieves better performance than the baseline vanilla Transformer. Strengths: Strengths: (1) The paper provides interesting aspects to analyze the reason of over-smoothing in Transformers based on Self-attention as a Gradient Descent Step to Minimize a Nonlocal Function and random walk analysis. (2) The theoretical proofs are detailed and clear. Weaknesses: Weaknesses: (1) Although the paper provides detailed theoretical proofs of the nonlocal variational denoising framework for self-attention and provides an explanation for the over-smoothing issue in transformer-based models, the empirical evaluations missed some important details and analyses. In Table 1, the configuration of NeuTRENO Adaptation is not explained. (2) Except the results in Table 2 on image segmentation, the gains from the proposed approach in Table 1 ImageNet classification and WikiText-103 LM are all quite small. In Table 1, the gains from the proposed approach over the baseline are 0.84 on Top-1 acc and 0.54 on Top-5 acc. In Table 3, the gains from the proposed approach over baseline are 0.55 on validation PPL and 0.59 on test PPL. It is not clear how stable the proposed model is on these datasets, since there is no reporting of standard deviations from multiple runs with different random seeds, so it is not clear whether the gains are by chance. Also, it is not clear whether these gains are statistically significant. (3) There have been prior works analyzing the uniformity and alignment problem of BERT on sentence representations and proposed postprocessing solutions, for example, the flow https://arxiv.org/pdf/2011.05864.pdf and BERT whitening method https://arxiv.org/abs/2103.15316 and supervised method such as SimCSE https://arxiv.org/abs/2104.08821 and enhancement over SimCSE on sentence representations. These works are not compared to or cited in the paper. (4) An important point is missing. Based on the results from models addressing uniformity in BERT and improving sentence representations, these models alleviate uniformity but cause degradations in transfer learning capability, e.g., SimCSE. This issue has not been discussed in the paper since the paper did not investigate the proposed approach in pre-training and fine-tuning paradigm, to investigate the impact of the proposed approach on transferability. (5) There are some presentation problems. Some math symbols and functions are not defined or clearly defined, as listed under Questions. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please address the points under weaknesses. Also, there are some presentation errors in the paper. For example, (1) The word “functional” is an adjective, so using it as a noun in the paper is ungrammatical, for example, in the title and phrases such as “minimize a functional which…” “Minimizing the resulting regularized energy functional, …” throughout the paper. (2) Some math symbols have not been defined, for example, D_x , D_{qk} in Section 1.1. (3) Line 89 defines the weights k(x,y), but the definition is kind of vague. Line 117 provides a form of K(x,y) :=k(x,y)+k(y,x), which needs to be explained clearer about this choice of K(x,y). Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: This is N/A for this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. Below we address your concerns. **Q1**. The empirical evaluations missed some important details and analyses. In Table 1, the configuration of NeuTRENO Adaptation is not explained. **Reply:** Thank you for your comment. In our NeuTRENO Adaptation experiment, we integrate the NeuTRENO architecture into a pre-trained DeiT-Tiny model with $\tilde\lambda = 0.6$ (determines how much the regularization impacts the solution, a small value makes the solution smoother and vice versa). This combined model is then finetuned for 100 additional epochs. The experiment and its configuration are detailed from lines 250 to 254 (main text) and lines 520 to 523 (Appendix A). For comparisons with related work (e.g., FeatScale method), please refer to lines 234 to 236 (main text). For evaluation metrics, we provide thorough descriptions in Appendix A, under "Datasets and Metrics" for each task. As for the analysis, we provide numerous empirical analyses of our models in Section 5 (main text) and in Appendix G. In particular, Fig. 1 shows that the cosine similarity between token representations across layers in a trained NeuTRENO is significantly reduced compared to the baseline model, demonstrating its advantage in mitigating oversmoothing. Furthermore, Fig. 6 presents similar results for a randomly initialized NeuTRENO, i.e., before training. Additionally, Fig. 5 (Appendix G2) demonstrates NeuTRENO's advantage in reducing head redundancy compared to the baseline. To address the efficiency of NeuTRENO, an efficiency analysis is provided in Appendix G4. Finally, to verify our theoretical results in Theorem 1, in Figure 3, we show that softmax attention indeed minimizes the functional $J(\bf{u})$ in Eqn. 6 in our paper. **Q2**. Question on the stability and significance of NeuTRENO's performance. **Reply:** Thank you for your question. Please kindly refer to points 1, 2, and 3 in the general response for our answers to your questions. **Q3**. Reviewer's concerns that prior work on the uniformity and alignment problem of BERT on sentence representations are overlooked. **Reply:** Thanks for bringing up these prior works. However, we respectfully disagree that our paper disregards these works. We would like to emphasize that our work focuses on developing a variational denoising framework to understand the self-attention of transformers as a gradient descent approximation of a functional. From this new finding, we explain the oversmoothing issue of transformers as a result of self-attention minimizing a functional. **It is important to note that we primarily aim at explaining and resolving the collapse of representations, i.e., oversmoothing, at the token level (tokens representations become identical) rather than at the sentence level (the model gives high similarity scores for semantically different sentences)**. Although the merits of our method can be generalized to enhance sentence-level embeddings, it is out of the scope of our work, and we will explore it in future work. In addition, our new empirical study shows that SimCSE, proposed to enhance sentence representations via contrastive learning, still suffers from oversmoothing at the token level. Fig. 7 in the attached PDF shows that the cosine similarity between token representations for each layer of the SimCSE, which is trained on the STS-12 dataset, increases as the model depth grows. In contrast, our NeuTRENO is shown to mitigate the issue. When integrating NeuTRENO with the pre-trained SimCSE, without any fine-tuning, we observe the cosine similarity scores reduce significantly (Fig. 7) **References** [1] Gao, T., Yao, X., and Chen, D. "SimCSE: Simple Contrastive Learning of Sentence Embeddings". EMNLP, 2021. **Q4**. The paper did not investigate the transferability of the proposed approach. **Reply:** In our paper, we empirically study the transferability of NeuTRENO and summarize the results in Table 2 (Section 4). In particular, after pretraining DeiT and NeuTRENO DeiT on the ImageNet classification task, we finetune both models for the ADE20k image segmentation task. The results demonstrate that the finetuned NeuTRENO DeiT yields significantly better performance than the finetuned DeiT. Further details regarding this transfer learning experiment are provided in Section 4, from lines 238 to 244, and also in Appendix A2. Note that we provide the accuracies of the pre-trained models on the ImageNet classification task in Table 1. **Q5**. Some math symbols have not been or are vaguely defined, for example, D_x , D_{qk}, k(x,y), and K(x, y). **Reply:** Thank you for your comment. In our paper, $D_{x}$ represents the feature dimension of the token $x_{i}$, $i=1,\dots,N$, i.e. $X \in \mathbb{R}^{N \times D_{x}}$. $D_{qk}$ represents the feature dimension of $q_{i}$ and $k_{i}$, $i=1,\dots,N$, i.e., $Q, K \in \mathbb{R}^{N \times D_{qk}}$. The function $k(x, y)$ captures the similarity between signal values at positions x and y. For example, $k(x, y)$ can be a radial basis function (RBF) kernel $e^{-\gamma||\Phi(x) - \Phi(y)||^2}$ for a feature function $\Phi$ [1]. $K(x, y)$ is indeed just a shorthand notation for $k(x, y) + k(y, x)$. We use this notation to simplify the expression. **References** [1] Bishop, Christopher M. "Pattern Recognition and Machine Learning". New York. Springer, 2006. **Q6**. The word “functional” is an adjective, so using it as a noun in the paper is ungrammatical. **Reply:** In the context of variational calculus, a functional is a mathematical concept that maps from a function space to the real numbers. A more formal definition can be found in [1]. **References** [1] A. N. Kolmogorov and S. V. Fomin, “Elements of the Theory of Functions and Functional Analysis”, Graylock Press, 1957. ----- We hope we have cleared your concerns about our work. We would appreciate it if we could get your further feedback at your earliest convenience. --- Rebuttal Comment 1.1: Title: Thank You Comment: Dear Reviewer qvvE, Thanks for your reviews of our paper. Since the discussion period between the authors and the reviewers was already over and we have not heard from you during this period, we would be grateful if the reviewer could let us know if all your questions are addressed to some extent. If you are satisfied with our answers, we hope that the reviewer will consider adjusting your score. Best regards, Authors
null
null
null
null
null
null
SoTTA: Robust Test-Time Adaptation on Noisy Data Streams
Accept (poster)
Summary: The authors point out that model may suffer from non-interest samples while TTA. Existing TTA methods are not robust to these samples. To address these issues, the authors proposed a methods called SoTTA with two key components, input-wise robustness via high-confidence uniform-class sampling and parameter-wise robustness via entropy-sharpness minimization. Strengths: The authors focus on TTA in the wild and point out that TTA with non-interest would lead to performance degradation. Weaknesses: The proposed method seems to be not novel. A) The proposed “Input-wise robustness via high-confidence uniform-class sampling” method excludes low-confidence samples in TTA. Such samples filtration strategies have been explored in ETA [Efficient Test-Time Model Adaptation without Forgetting]. B) The proposed “Parameter-wise robustness via entropy-sharpness minimization” method introduces entropy-sharpness minimization, which have been explored in SAR [Towards Stable Test-time Adaptation in Dynamic Wild World]. It would be better for the authors to clarify the differences between the proposed methods and the TTA methods I mentioned above. In the experimental results, I found the proposed always outperforms SAR. But in my understanding, the main components of these two methods are almost the same. Both of them include low-confidence sample filtration strategy and entropy-sharpness minimization. So I have no idea why the proposed method can yield much better results. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See comments above. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: See comments above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your time and effort in providing us with positive comments. We respond to your question in what follows. Please also refer to the *global response* we have posted together. --- **Weakness 1. It would be better for the authors to clarify the differences between the proposed methods and the TTA methods I mentioned above.** While SoTTA (ours), EATA, and SAR all leverage sample filtering strategy, the key distinction of input-wise robustness of SoTTA and EATA/SAR lies in two aspects: (1) characteristics of the samples filtered out and (2) memory management strategies. First, our high-confidence sampling strategy in SoTTA aims to filter non-interest samples by utilizing only the samples with high confidence, while EATA and SAR use the same algorithm that excludes a few high-entropy samples, particularly during the early adaptation stage. As evidence, we show that our method excludes 99.98% of the non-interest samples, whereas EATA and SAR exclude 33.55% of such samples. Second, while EATA and SAR adapt to every incoming low-entropy sample, SoTTA leverages a uniform-class sampling approach to prevent overfitting. As shown in Figure 5, non-interest samples often lead to imbalanced class predictions, and these skewed distributions could lead to an undesirable bias in p(y) and thus might negatively impact TTA objectives, such as entropy minimization. The ablation study in Table 2 of our paper shows the effectiveness of uniform sampling with a 9.8% accuracy improvement. We acknowledge that both SoTTA and SAR utilize sharpness-aware minimization proposed by Foret et al. [r1]. However, we clarify that the motivation behind using SAM is different. While SAR intends to avoid model collapse when exposed to samples with large gradients, we aim to enhance the model's robustness to non-interest samples with high confidence scores. As illustrated in Figure 6 in the manuscript, we observed that entropy-sharpness minimization effectively prevents the model from overfitting to non-interest samples. In conclusion, while our algorithm results in marginal performance degradation in noisy settings (82.9% -> 81.0% for Noise), EATA and SAR show significant degradation: EATA 82.4% -> 36.6% for Noise and SAR 78.3% -> 58.3% for Noise. The detailed results of SoTTA and SAR are included in the manuscript, and the result of EATA is included in the global response #1. **Weakness 2. In the experimental results, I found the proposed always outperforms SAR. But in my understanding, the main components of these two methods are almost the same. Both of them include low-confidence sample filtration strategy and entropy-sharpness minimization. So I have no idea why the proposed method can yield much better results.** We summarize the key difference between SoTTA and SAR in the comments above. To conclude, It is precisely due to the robustness of our method against non-interest samples that SoTTA outperforms SAR. By reducing the influence of non-interest samples and considering class imbalance through uniform-class sampling, our approach mitigates the risk of model corruption caused by biased learning from a few non-interest samples. It results in better overall performance, as demonstrated in our experimental results. We thank the reviewer for pointing out this, and we will revise our manuscript to clarify the difference between SoTTA and SAR. [r1] Foret, Pierre, et al. "Sharpness-aware Minimization for Efficiently Improving Generalization." International Conference on Learning Representations. 2020. --- Rebuttal Comment 1.1: Title: Further Comments Comment: Thank you for the authors' response. The authors highlight two main distinctions: *(1) characteristics of the samples filtered out and (2) memory management strategies.* However, in their detailed response, I struggled to grasp the exact disparity in memory management. Moreover, I'm seeking a clear understanding of the technical contrasts between the proposed method and ETA/SAR. For instance, while ETA performs operation "A", your SoTTA performs operation "B". In your scenario, "B" proves more effective than "A," leading to improved accuracy. The differences presented in the current version of the response are somewhat general. I recognize that your method's motivation varies from ETA/SAR, yet I'm specifically interested in perceiving technical distinctions. --- Reply to Comment 1.1.1: Title: Thank you for your response Comment: We appreciate your response to our rebuttal. We first clarify the main technical distinctions between SoTTA (ours) and EATA/SAR regarding sample management. EATA and SAR employ the same strategy of "excluding high-entropy samples" as follows: --- Input: test data stream $\mathbf{x}_t$, memory $M$ with capacity $N$, entropy threshold $E_0$ for test time $t \in \\{1, \cdots, T\\}$ do &nbsp;&nbsp; if $E(\mathbf{x}_t; \theta) < E_0$ then &nbsp;&nbsp;&nbsp;&nbsp; Add $(\mathbf{x}_t, \hat{y_t})$ to $M$ &nbsp;&nbsp; if $t$ % batch_size == $0$ then &nbsp;&nbsp;&nbsp;&nbsp; Update model $\theta$ with $M$ &nbsp;&nbsp;&nbsp;&nbsp; Set $M$ = $\emptyset$ &nbsp;&nbsp;&nbsp;*# re-collect data from scratch* --- Our memory management scheme is as follows: --- Input: test data stream $\mathbf{x}_t$, memory $M$ with capacity $N$, confidence threshold $C_0$ for test time $t \in \\{1, \cdots, T\\}$ do &nbsp;&nbsp; if $C(\mathbf{x}_t; \theta) > C_0$ then &nbsp;&nbsp;&nbsp;&nbsp; if $|M| < N$ then &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Add $(\mathbf{x}_t, \hat{y_t})$ to $M$ &nbsp;&nbsp;&nbsp;&nbsp; else &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $\mathcal{Y}^* \gets$ the most prevalent class(es) in $M$ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if $\hat{y}_t \notin \mathcal{Y}^*$ then &nbsp;&nbsp;&nbsp;*# balancing classes* &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Randomly discard $(\mathbf{x}_i, \hat{y}_i)$ from $M$ where $\hat{y}_i \in \mathcal{Y}^*$ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; else &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Randomly discard $(\mathbf{x}_i, \hat{y}_i)$ from $M$ where $\hat{y}_i = \hat{y}_t$ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Add $(\mathbf{x}_t, \hat{y}_t)$ to $M$ &nbsp;&nbsp; if $t$ % batch_size == $0$ then &nbsp;&nbsp;&nbsp;&nbsp; Update model $\theta$ with $M$ --- SoTTA’s two technical distinctions regarding memory management strategy are (1) **uniform-class memory management** and (2) **continual memory management**. First, EATA and SAR filter out low-entropy data without considering their distribution, which suffers from skewed predicted distributions of non-interest samples. This has a detrimental effect on TTA objectives, such as entropy minimization. Our proposed **uniform-class memory management (Uniform-class)** addresses this issue, resulting in improved accuracy. Second, EATA and SAR gather only low-entropy samples until a batch-sized number of test samples pass by. These gathered samples are then utilized for adaptation. Subsequently, EATA and SAR reset the memory buffer and restart the sample collection process for each adaptation. This strategy is susceptible to overfitting due to a smaller number of samples used for adaptation and the temporal distribution drift of the samples. In contrast, our **continual memory management (Continual)** approach effectively mitigates this issue by retaining high-confidence uniform-class samples in the memory. We also provide the results of an ablation study that demonstrates the effectiveness of each technique: | | Benign | Near | Far | Attack | Noise | Avg | |---|:---:|:---:|:---:|:---:|:---:|:---:| | SoTTA (w\o Uniform-class) | 83.1±0.5 | 76.7±4.6 | 66.7±5.0 | 83.9±0.8 | 52.3±19.2 | 72.5 | | SoTTA (w\o Continual) | 81.0±0.5 | 79.5±0.3 | 75.5±1.8 | 84.4±0.2 | 65.7±7.0 | 77.2 | | SoTTA | 82.9±0.4 | 81.4±0.5 | 81.6±0.5 | 84.5±0.3 | 81.0±1.5 | 82.3 | Thank you again for your valuable feedback and suggestions. We will carefully incorporate the discussions and results in our final manuscript. Please don’t hesitate to leave additional comments if you have any follow-up questions or discussions. Best, Authors
Summary: This paper studies a practical problem of test-time adaptation where non-interest testing samples may appear and mislead the adaptation. This problem is quite serious in practical applications, and the problem setting is relatively novel. To address this problem, the authors propose the SoTTA method, which solves this problem in two aspects. For input-wise robustness, SoTTA filters out the non-interest samples with a high-confidence uniform-class memory buffer. For parameter-wise robustness, entropy-sharpness minimization is adopted to ensure that the landscape is smooth during the adaptation. The proposed method is evaluated on two benchmark datasets under one standard setting and four robust settings with different types of non-interest samples. The results show that SoTTA gives state-of-the-art performance compared to existing TTA methods. Strengths: 1. This paper studies a novel problem setting of test-time adaptation, where non-interest testing samples may appear and mislead the adaptation. This is a relatively novel and practical problem in real applications. 2. The overall idea of this paper makes sense. This paper tackles the harmful impact of non-interest with a filter and robust loss function. The experiments in the main paper also show that the proposed method gives state-of-the-art performance. Weaknesses: 1. First, this paper ignores one common TTA benchmark, i.e., CIFAR100. Additionally, the results of ImageNet-C have been hidden in the appendix. These key results should be presented in the main paper with comprehensive analysis. 2. One TTA algorithm, EATA [1], adopts a similar sample selection strategy during adaptation. This method should be taken into consideration in the experiment parts because it has the potential to handle the problem of non-interest samples. 3. This paper does not provide any discussion about the selection and sensitivity of threshold C0. Intuitively, the optimal C0 depends on both in-distribution data and out-of-distribution data, and it is not easy to decide in real applications. 4. There is more related work that can be discussed to further improve this paper. For test-time adaptation, the above-mentioned EATA method [1] and recent advanced TTA methods [2-4] about robustness and evaluation should be considered. For out-of-distribution detection, methods [5-6] that can be efficiently optimized in an unsupervised manner should be taken into consideration. **Reference** [1] Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Yaofo Chen, Shijian Zheng, Peilin Zhao, Mingkui Tan: Efficient Test-Time Model Adaptation without Forgetting. ICML 2022: 16888-16905 [2] Hao Zhao, Yuejiang Liu, Alexandre Alahi, Tao Lin: On Pitfalls of Test-Time Adaptation. ICML 2023 [3] Zhi Zhou, Lan-Zhe Guo, Lin-Han Jia, Dingchu Zhang, Yu-Feng Li: ODS: Test-Time Adaptation in the Presence of Open-World Data Shift. ICML 2023 [4] Tong Wu, Feiran Jia, Xiangyu Qi, Jiachen T. Wang, Vikash Sehwag, Saeed Mahloujifar, Prateek Mittal: Uncovering Adversarial Risks of Test-Time Adaptation. ICML 2023 [5] Zhi Zhou, Lan-Zhe Guo, Zhanzhan Cheng, Yu-Feng Li, Shiliang Pu: STEP: Out-of-Distribution Detection in the Presence of Limited In-Distribution Labeled Data. NeurIPS 2021: 29168-29180 [6] Jiangpeng He, Fengqing Zhu: Out-Of-Distribution Detection In Unsupervised Continual Learning. CVPR Workshops 2022: 3849-3854 Technical Quality: 1 poor Clarity: 3 good Questions for Authors: 1. The experiment shows that RoTTA outperforms existing TTA methods even in standard settings. Can the authors explain why this phenomenon occurs? Intuitively, the techniques mentioned in this article are all aimed at addressing the problem of non-interesting samples. What factors enable them to achieve better performance when non-interesting samples do not exist? 2. Can the authors provide some sensitivity analysis regarding the threshold C0? It is very important for the actual effect of filtering out non-interesting samples. An adaptive scheme for setting C0 would also be acceptable. 3. One naive way to solve the proposed setting is to adopt an out-of-distribution process before test-time adaptation. How does this naive method work in the proposed setting? **[IMPORTANT]** I would like to raise my score if you successfully address my concerns in the Weakness and Question sections. Overall, this paper is interesting and I appreciate it. However, it is currently slightly below the level of acceptance. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 1 poor Presentation: 3 good Contribution: 2 fair Limitations: The authors have properly discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your time and effort in providing us with positive comments. We respond to your question in what follows. Please also refer to the *global response* we have posted together. --- **Weakness 1.** In our current manuscript, the CIFAR100 dataset acted as one of the non-interest scenarios ("Near") as it has similar characteristics to CIFAR10 and ImageNet, as detailed in Section 4 of our manuscript. However, we acknowledge the importance of the CIFAR100 benchmark. We will conduct experiments on the CIFAR100 dataset and include the results in the final manuscript. For your concerns regarding ImageNet-C, we put its result in the appendix due to space limitations. We will restructure the evaluations section to incorporate the ImageNet-C result in the main paper. **Weakness 2.** Thank you for your valuable suggestion. We include the results of EATA and discussions in global response #1. To summarize, we found that EATA shows significant accuracy degradation with the presence of non-interest samples (e.g., 82.4% -> 36.6% for Noise). **Weakness 3.** Thank you for your valuable suggestion. We include the sensitivity of the hyperparameter C0 and discussions in global response #2. **Weakness 4.** We want to highlight that our problem setting and approach differ from those studies. EATA [1] is primarily concerned with preventing catastrophic forgetting, which differs from our goal of progressively adapting to new test samples of interest. We have experimented with EATA in global response #1. DIA [4] 's primary focus is designing an attack method for TTA, which we use as the attack scenario in our experiment (Section 4 of our manuscript). TTAB [2] and ODS [3] are tailored to address distributional shifts but do not consider non-interest samples. The focus of OOD methods [5, 6] is the detection of OOD samples within the same domain, which is not suitable for TTA scenarios. We have included the experimental results and limitations of OOD detection methods in response to your Question 3 below. We will include discussions on these studies in our final manuscript. **Question 1.** We assume the reviewer asked about our algorithm (SoTTA), not RoTTA. Our interpretation of this result is two-fold. First, our high-confidence uniform-class sampling strategy filters not only non-interest samples but also benign samples that would negatively impact the algorithm’s objective. This implies that there exist samples that are more beneficial for adaptation, which aligns with the findings that high-entropy samples harm adaptation performance [1]. Second, entropy-sharpness minimization helps ensure both robustness to non-interest samples and generalizability of the model by preventing model drifts, leading to performance improvement with benign samples. This is also consistent with SAR [r1], which argues that the entropy-sharpness minimization is robust to large gradients, which could harm the adaptation performance. **Question 2.** Thank you for your valuable suggestion. We include the sensitivity of the hyperparameter C0 and discussions in global response #2. **Question 3.** Following your suggestion, we conducted additional experiments using one of the famous OOD detection algorithms, ODIN [r2], in our noisy data streams. Specifically, we filtered OOD samples detected by ODIN and performed TTA algorithms on the samples left. Similar to prior studies on OOD, ODIN uses a thresholding approach to predict whether a sample is OOD. It thus requires validation data with binary labels indicating whether it is in-distribution or OOD to decide the best threshold. However, in TTA scenarios, validation data is not provided, which makes it hard to apply OOD algorithms directly in our scenario. We circumvented this problem using the labeled test batches to get the best threshold. Following the original paper, we searched for the best threshold from 0.1 to 0.12 with a step size of 0.000001, which took over 20,000 times longer than the original TTA algorithm. The result (Table 8 in the one-page PDF) shows that the impact of discarding OOD samples with ODIN (with-ODIN) is negligible, yielding only a 0.331% improvement in the average accuracy despite a huge computation cost. There exist practical limitations of OOD detection algorithms for TTA. (1) OOD methods assume that a model is fixed during test time, while a model changes continually in TTA. (2) As previously noted, most OOD algorithms require labels for validation data unavailable in TTA scenarios. Even using the same test dataset for selecting the threshold, the performance improvement was marginal. (3) Low performance possibly results from the fact that OOD detection studies are built on the condition that training and test domains are the same, which differs from TTA’s scenario. These collectively make it difficult to apply OOD detection studies directly to TTA scenarios. We discussed these limitations in Section 5 of our manuscript. We will include the whole result in our appendix in our final manuscript. [r1] Niu, Shuaicheng, et al. "Towards Stable Test-time Adaptation in Dynamic Wild World." ICLR ‘22 [r2] Liang, Shiyu, Yixuan Li, and R. Srikant. "Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks." ICLR ‘18 --- Rebuttal Comment 1.1: Comment: Your rebuttal makes sense. I hope you can add the corresponding results and include discussions on the listed studies in the final manuscript. I decide to raise my score. --- Reply to Comment 1.1.1: Title: Thank you for your response Comment: Thank you for your response to our rebuttal! We will add the results and discussions in the rebuttal to the final manuscript. Thank you again for your valuable feedback and suggestions. Best, Authors.
Summary: This article presents a new Test-Time Adaptation (TTA) scenario, wherein the model is adapted to noisy test streams. To address the challenges posed by this scenario, the paper introduces the Screening-out Test-Time Adaptation (SoTTA) algorithm, which leverages input-wise and parameter-wise robustness. The effectiveness of SoTTA is demonstrated through extensive comparison experiments conducted on TTA benchmarks. Strengths: 1. This work proposes a novel test time adaptation setup, which considers the noisy data (non-interest) in real world application during test phase, whose motivation is convincing. 2. The significant performance gain shows SoTTA addresses the challenges under noisy test stream well. Weaknesses: 1. The initial component of the method is inspired by the memory bank concept introduced in RoTTA, albeit in a simplified form. The second component, pioneered by SAR[1], represents a novel approach within the TTA domain. However, it is worth noting that the methodology section may lack originality. 2. The sensitivity analysis and selection criterion for key hyperparameters, specifically $m,C_0$ , are not included in the current study. 3. The motivation of this paper stems primarily from experimental findings, and it would be advantageous to analyze the motivation or methods from a theoretical perspective. 4. The paper contains several typographical errors, and it would be valuable to thoroughly revise it. [1] Towards stable test-time adaptation in dynamic wild world Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. What would be the outcome if CSTU and ESM were combined? Would the performance surpass that of SoTTA? 2. It is acknowledged that in real-world scenarios, the non-interest scenes mentioned in the experimental section may occur simultaneously. Have any relevant experiments been conducted to address this? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Actually, the problem setting is interesting to some extent, however, the techniques in this paper are similar to several compared methods, which may limit the novelty of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your time and effort in providing us with positive comments. We respond to your question in what follows. Please also refer to the *global response* we have posted together. --- **Weakness 1.** We appreciate the reviewer's insightful comment. Regarding the memory bank concept introduced in RoTTA, it is essential to note that the fundamental objective of RoTTA differs from our proposed method. RoTTA's memory bank primarily focuses on maintaining recent high-confidence samples. However, without a dedicated filtering-out method for low-confidence samples, RoTTA fails to avoid non-interest samples, especially in the early stage of test-time adaptation. In contrast, our confidence-based memory management scheme effectively rejects non-interest samples, and thus it rejects potential model drift from the beginning of TTA scenarios. As a result, our approach outperforms RoTTA in noisy test streams, as shown in Table 1 of our manuscript. Regarding the second component, we acknowledge that both SoTTA and SAR utilize sharpness-aware minimization proposed by Foret et al. [r1]. However, the motivation behind using SAM is different. While SAR intends to avoid model collapse when exposed to samples with large gradients, we aim to enhance the model's robustness to non-interest samples with high confidence scores. As illustrated in Figure 6 in the manuscript, we observed that entropy-sharpness minimization effectively prevents the model from overfitting to non-interest samples. **Weakness 2.** Thank you for your valuable suggestion. We include the sensitivity analysis of the hyperparameters at global response #2. **Weakness 3.** We appreciate your suggestion. The current manuscript highlights the practical issues of noisy data streams in real-world TTA scenarios from experimental findings and proposes a methodology to address this problem effectively. Here, we provide the theoretical motivation for handling noisy data streams in TTA scenarios. With the Bayesian-learning-based frameworks [r2, r3], we can express the posterior distribution of the model in terms of training ($D$) / benign test data ($B$) in test-time adaptation: Equation 1. $\log p(\theta | D, B) = \log q(\theta) - \frac{\lambda_B}{|B|} \sum_{b=1}^{|B|} H(y_b | x_b)$ The posterior distribution of model parameters depends on the prior distribution ($q$) and the average of entropy ($H$) of benign samples with a certain weight ($\lambda$). Here, we incorporate the additional noisy data stream ($N$ into Equation 1 and introduce the new posterior distribution for our target scenario with noisy streams: Equation 2. $\log p(\theta | D, B, N) = \log q(\theta) - \frac{\lambda_B}{|B|} \sum_{b=1}^{|B|} H(y_b | x_b) - \frac{\lambda_N}{|N|} \sum_{n=1}^{|N|} H(y_n | x_n)$ We can now derive model parameter variations for benign and non-interest test samples: Equation 3. $\log p(\theta | D, B) - \log p(\theta | D, B, N)= \frac{\lambda_N}{|N|} \sum_{n=1}^{M} H(y_n | x_n)$ Equation 3 implies that the (1) model adapted only from benign ($B$) and (2) model adapted with both benign ($B$) and noise ($N$) differs by the amount of the average entropy of noisy (non-interest) samples. The equation also suggests that a high entropy from severe noise would result in a significant model drift in adaptation. We appreciate the reviewer's constructive feedback. We will incorporate and discuss theoretical analysis further in our final manuscript. **Weakness 4.** Thank you for your careful examination of our paper. We apologize for the typographical errors in the initial submission, and we will thoroughly revise the manuscript. **Question 1.** Here, we compare the original SoTTA with the CSTU+ESM version (under the same setting as Table 1 in our manuscript). | | Benign | Near | Far | Attack | Noise | Avg | |---|:---:|:---:|:---:|:---:|:---:|:---:| | SoTTA | 82.9±0.4 | 81.4±0.5 | 81.6±0.5 | 84.5±0.3 | 81.0±1.5 | 82.3 | | CSTU+ESM | 83.0±0.2 | 81.0±0.0 | 78.5±0.2 | 83.7±0.2 | 78.7±0.9 | 81.0 | The above table shows that CSTU shows comparable accuracy to SoTTA (HUS+ESM) without non-interest samples (Benign). However, CSTU shows 2.3x higher performance degradation with Noise samples (- 4.3%p) than SoTTA (-1.9%p). This result originates from CSTU's structure of only maintaining the "recent high-confidence samples." CSTU does not discard any low-confident samples when the memory is not full, which is susceptible to harmful samples in the early adaptation stage. **Question 2.** Our methodology filters non-interest samples, which could be easily extended to simultaneous non-interest scenes. We expect to further develop robust TTA methods in more complex settings, such as mixed-domain, shuffled-class scenarios [r4]. [r1] Foret, Pierre, et al. "Sharpness-aware Minimization for Efficiently Improving Generalization." International Conference on Learning Representations. 2020. [r2] Brahma, Dhanajit, and Piyush Rai. "A Probabilistic Framework for Lifelong Test-Time Adaptation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. [r3] Yves Grandvalet and Yoshua Bengio. "Semi-supervised learning by entropy minimization." Advances in neural information processing systems, 17, 2004. [r4] Marsden, Robert A., Mario Döbler, and Bin Yang. "Universal Test-time Adaptation through Weight Ensembling, Diversity Weighting, and Prior Correction." arXiv preprint arXiv:2306.00650 (2023). --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer gePD Comment: Thanks for the detailed response, and some of my concerns have been addressed. Indeed, the test time setting on Noisy Data Streams is interesting and challenging. However, I still believe the techniques in this paper are similar to several SOTA TTA methods, and I will keep my score. --- Reply to Comment 1.1.1: Title: Thank you for your response Comment: We are pleased that our rebuttal addressed your concerns. We also appreciate your acknowledging that our problem setting is interesting and challenging. Regarding the techniques, we still believe that our method is carefully designed and has novel components to tackle the challenging problem, as we wrote in our rebuttal. As a result, our method advances SOTA TTA methods with notable performance improvements (e.g., on average, +14.58%p better than SAR and +5.82%p better than RoTTA on CIFAR10-C). Thank you again for your response. We will strengthen our paper based on your valuable comments and feedback. Best, Authors
Summary: This paper proposes screening-out test-time adaptation which is claimed robust to non-interest samples. It filters out the impact of non-interest samples with a high-confidence uniform-class sampling. It proposes entropy-sharpness minimization to deal with large gradients. Experiments are completed on CIFAR10-C and ImageNet-C. Strengths: 1. The proposed problem (screening-out test-time adaptation) is novel and sharp. 2. This paper has a good presentation. For example, Fig.1 and Fig.2 are very clear. 3. Entropy-sharpness minimization seems reasonable as discussed in Section 3.2. 4. Experiments are completed on both small and large datasets including ImageNet-C. As shown in Tab.1, the improvement is good enough. Weaknesses: 1. The problem is interesting but the proposed two methods are straightforward. I am a little worried about the technical novelty. 2. The design of input-wise robustness is hard coding. Although some samples are bad for optimization, they may be essential for the perception. 3. Only classification is completed. However, segmentation is also very important in autonomous driving scenarios. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can it also work on segmentation and detection? Please discuss. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please refer to "weakness". Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your time and effort in providing us with positive comments. We respond to your question in what follows. Please also refer to the *global response* we posted together. --- **Weakness 1. The problem is interesting but the proposed two methods are straightforward. I am a little worried about the technical novelty.** We appreciate the reviewer's insightful comment. Regarding high-confidence uniform sampling, RoTTA could be compared with our method. Regarding the memory bank concept introduced in RoTTA, it is essential to note that the fundamental objective of RoTTA differs from our proposed method. RoTTA's memory bank primarily focuses on maintaining recent high-confidence samples. However, without a dedicated filtering method for low-confidence samples, RoTTA fails to avoid non-interest samples, especially in the early stage of test-time adaptation. In contrast, our confidence-based memory management scheme effectively rejects non-interest samples, and thus it rejects potential model drift from the beginning of TTA scenarios. As a result, our approach outperforms RoTTA in noisy test streams, as shown in Table 1 of our manuscript. Regarding the second component, we acknowledge that both SoTTA and SAR utilize sharpness-aware minimization proposed by Foret et al. [r1]. However, we clarify that the motivation behind using SAM is different. While SAR intends to avoid model collapse when exposed to samples with large gradients, we aim to enhance the model's robustness to non-interest samples with high confidence scores. As illustrated in Figure 6 in the manuscript, we observed that entropy-sharpness minimization effectively prevents the model from overfitting to non-interest samples. **Weakness 2. The design of input-wise robustness is hard coding. Although some samples are bad for optimization, they may be essential for the perception.** Low-confidence samples could be beneficial for the perception. However, we want to note the risks of model corruption associated with including such low-confidence samples in the selection. In our discussion of the effect of hyperparameters, as detailed in the global response #2, we observed that lowering the confidence threshold (C0) resulted in performance degradation (e.g., 81.0% (C0=0.99) -> 77.7% (C0=0.7) in Noise scenario; see Figure 8 in the attached PDF). This observation supports the need for a cautious approach in dealing with low-confidence samples. In light of your feedback, we will incorporate this discussion of the trade-off between the perception and risks of model corruption in our filtering strategy into the manuscript. **Weakness 3. Only classification is completed. However, segmentation is also very important in autonomous driving scenarios. / Question 1. Can it also work on segmentation and detection? Please discuss.** We appreciate your insightful comment. While our work primarily focuses on classification tasks, we acknowledge the importance of image segmentation and object detection, particularly in autonomous driving scenarios. For image segmentation, when non-interest objects are present in the input, the model might produce noisy predictions on those pixels, leading to detrimental results. Extending our method to operate at the pixel level would allow it to be compatible with the segmentation task while minimizing the negative influences of those noisy pixels on model predictions in test-time adaptation scenarios. Similarly, our method could be tailored to object detection's classification (recognition) task. For example, in the context of the YOLO framework, our method could filter and store grids with high confidence for test-time adaptation, enhancing detection accuracy. However, our current approach must address the localization task (bounding box regression) during test-time adaptation. Implementing this feature is non-trivial and would require careful consideration and potential redesign of certain aspects of our methodology. Accurately localizing bounding boxes during test-time adaptation presents an exciting avenue for future research. We sincerely appreciate the reviewer's insightful comment and will incorporate this discussion into our manuscript.
Rebuttal 1: Rebuttal: ## Global Response Dear Reviewers, We sincerely appreciate your efforts and time in reviewing our manuscript. The contribution of our work lies in investigating the crucial yet unexplored challenge of test sample diversity in real-world scenarios. Notably, we unveil that existing TTA algorithms suffer from significant performance degradation with such noisy streams. As a solution, we propose SoTTA that is robust to the sample diversity and outperforms the state-of-the-art baselines. We appreciate the reviewer's constructive and insightful comments on our paper. To answer your concerns and questions, we have written the responses with the following additional experiments and clarification: Additional experiments on EATA and RoTTA to clarify the differences between the methods and our approach (6Z6q, HUhd, gePD) Sensitivity analysis of the hyperparameters (gePD, 6Z6q) Clarification of the effectiveness of our input-wise robustness method compared with other existing approaches (kiEt, gePD) Clarification of the difference between SoTTA and SAR (gePD, HUHd) Theoretical analysis regarding the motivation for handling noisy data streams in TTA scenarios (gePD) Additional experiment to compare our method with an OOD detection method (6Z6q) Clarification of our evaluation scenario with CIFAR100 (6Z6q) Comparison between our method and relevant TTA methods to clarify the focus of our work (6Z6q) Discussion of the possibilities of applying our methods in broader scenarios such as image segmentation and object detection (kiEt) Additionally, we summarize two major issues as follows: ### 1. Comparison with EATA [c1] (Reviewer 6Z6q and HUhd) Two reviewers (6Z6q and HUhd) identified EATA [c1] as a critical baseline for its similarity in sample selection with our approach. In response, we evaluated EATA under two conditions on our noisy data stream scenario with CIFAR10-C: (1) EATA, where we applied the method directly to the noisy data stream, encompassing Fisher importance calculation and model adaptation in the presence of non-interest samples, and (2) EATA-Clean, where we gave EATA an advantage by excluding the non-interest samples when calculating the Fisher information matrix. We ensured that the experimental settings remained consistent with those described in our original submission, as noted in Table 1 of our manuscript. The experiment results are presented in the table below: | | Benign | Near | Far | Attack | Noise | Avg | |---|:---:|:---:|:---:|:---:|:---:|:---:| | SoTTA | 82.9±0.4 | 81.4±0.5 | 81.6±0.5 | 84.5±0.3 | 81.0±1.5 | 82.3 | | EATA-Clean | 82.4±0.2 | 64.4±1.5 | 57.6±2.6 | 70.9±0.7 | 36.6±1.2 | 62.4 | | EATA | 82.4±0.2 | 63.9±0.4 | 56.3±0.5 | 70.9±0.6 | 36.0±0.8 | 61.9 | From the results, we observed a significant accuracy degradation for EATA in the presence of non-interest samples, dropping from 82.4% to 36.6% when confronted with noisy data (Noise). Our proposed SoTTA algorithm utilizes high-confidence sampling, effectively excluding a more significant number of non-interest samples. SoTTA filtered out 99.98% of non-interest samples, whereas EATA only excluded 33.55% in the memory-filling stage. In addition, EATA calculates the Fisher importance based on a few initial test-time samples (e.g., 2000 samples [c1]) to calculate parameter importance and prevent catastrophic forgetting. This approach becomes problematic as including non-interest samples in the initial sample set corrupts the Fisher importance values, resulting in a degradation of accuracy (EATA-Clean 62.4% -> EATA 61.9%). ### 2. Effect of hyperparameters C0, m (Reviewer gePD and 6Z6q) In response to the reviewers' (gePD and 6Z6q) request for the sensitivity analysis of hyperparameters, we conducted experiments to present the results for two specific hyperparameters: confidence threshold (C0) and BN momentum (m). Notably, we varied C0 within the range of [0.7, 0.999] and m within [0.05, 0.3] and reported the corresponding accuracy. Please refer to the detailed results in the attached PDF file. Confidence threshold (C0) (Figure 8a in the one-page PDF): Our result shows that the selection of C0 shows similar patterns across different scenarios (Benign ~ Noise). The result illustrates a tradeoff; a low C0 value does not effectively reject non-interest samples, while a high C0 value filters benign data. We found a proper value of C0 (0.99) that generally works well across the scenarios. Also, we found that the optimal C0 depends primarily on in-distribution data. Our interpretation is that setting different C0 values for CIFAR10 and ImageNet is straightforward as they have a different number of classes (10 vs. 1000), which leads to different ranges of the model’s confidence. BN momentum (m) (Figure 8b in the one-page PDF): Across the tested range, the variations in performance were found to be negligible. This finding indicates that choosing a low momentum value from within the specified range ([0.05, 0.3]) is adequate to maintain a favorable performance. Please note that setting a high momentum would corrupt the result, which is implicated by the algorithms directly utilizing test-time statistics (e.g., TENT) suffering from accuracy degradation with noisy data streams (e.g., TENT: 81.0% -> 52.1% for Noise at Table 1 in the manuscript). We will incorporate these findings and discussions into the final manuscript. Thank you again for your thoughtful review, and we are open to any further suggestions or questions you may have. [c1] Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Yaofo Chen, Shijian Zheng, Peilin Zhao, Mingkui Tan: Efficient Test-Time Model Adaptation without Forgetting. ICML 2022: 16888-16905 Pdf: /pdf/f1c90d9c6f8bb5193cf7257381607489fae7c1f7.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
A Data-Free Approach to Mitigate Catastrophic Forgetting in Federated Class Incremental Learning for Vision Tasks
Accept (poster)
Summary: In this paper, authors propose a new approach to perform class-incremental learning in federated setting. Their approach uses a generative model trained at the server side which generates synthetic images to be used as a replacement for data corresponding to old classes. The authors claim through empirical results that their approach outperform all the earlier approaches for this problem. Strengths: This paper clearly articulates the class incremental training in a federated learning setting and the related challenges. The paper also clearly explains different loss functions used for training the generative model and the client training paradigms used in the proposed approach. Ablation study is also performed to break down the gains into different constituents of the approach. Weaknesses: On page 3, authors mention that the main difference between FedCIL and their approach is that generative model in the proposed approach is trained in a data free manner which can reduce client's training time and computation. It's not clear how does this difference leads to such better performance in the experimental results. It will be nice to have some discussion and experiments to explain. As mentioned on page 4, line 177, the generative model produces samples resembling the original training inputs and given that this model is transmitted to all the clients, why this is not a privacy issue? It will be good to analyze any potential privacy leakage through the shared generative model. It's not clear the SuperImageNet dataset claimed by the authors, is it just a regrouping of the ImageNet Dataset or something more substantial. In Fig. 5, it's quite surprising to see the such a poor performance of FedCIL compared to FedAvg and FedProx which are not even designed for this problem. There is not much discussion in the paper on this. Specifically, are these results over one run or multiple runs? What hyper parameter thing has been performed for these approaches. In Fig. 5, performance of MFCL is falling the most as we go from CIFAR-100 to tinyImageNet to SuperImageNet although the number of samples per task is increasing. It will be good to understand why is this observed? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please see above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for reading the details of our paper and appreciate their comments. We are grateful for their insightful questions and try to address their concern in detail: --- > FedCIL compared to MFCL and other methods We have briefly discussed the differences between FedCIL and MFCL and our hyperparameters in section 7 of the appendix. FedCIL is a GAN-based method where all the participating clients train the discriminative model to align with their local generative model. In the original paper, the authors show successful results for smaller datasets (MNIST and CIFAR10) and a small number of clients (=5). We believe by increasing the dataset’s difficulty and reducing the number of examples per client; this method would face the following challenges; 1. GAN models require training on a large amount of data for a long time. However, in scenarios like CIFAR100 with 50 clients, each client has a few examples of each class. 2. GAN are commonly known to be very sensitive to hyperparameters, especially in a decentralized setup. **Hyperparameters.** To train FedCIL for TinyImageNet and SuperImageNet, we tried SGD and Adam optimizers with learning rates $\in${0.1, 0.05, 0.01} and local epoch $\in$ {1, 2}. For a fair comparison, we adopt a generative model architecture with the same input dimension and a total number of parameters as MFCL. Unfortunately, the model did not converge to good performance with the mentioned hyperparameters. We want to highlight that training FedCIL is substantially more expensive compared to other methods, as individual clients have their local generative model. Although other hyperparameters may improve the results, our evaluation indicates the difficulty of the hyperparameter tuning for FedCIL. It is worth mentioning that in order to train the CIFAR-100 dataset, we used a local epoch $8 \times$ larger than the other baselines; otherwise, the performance on this dataset would also degrade. Having this said, we believe that FedCIL is designed for cross-silo settings where each client has a lot of computational power and training data. On the contrary, MFCL works well across both cross-silo and cross-device settings. > Privacy implications of sharing the generative model In our method, the server relies only on the aggregated global model to train the generative model. Since this global model is already shared with the clients in any standard FL framework (such as FedAvg), clients can anyway train such generative models locally and synthesize data. As a result, we believe our method generally does not introduce any *additional privacy* issues. It is still possible to conduct other attacks, such as model/data poisoning, that are common in FedAvg. Our claim about privacy is compared to other federated continual learning methods where the clients need to share a locally trained generative model or perturbed private data, causing more privacy problems. It is noteworthy that common defense mechanisms for FedAvg, such as secure aggregation, are also applicable to our algorithm. In MFCL, the server does not require access to the individual client's updates and uses the aggregated model for training. Therefore, training a generative model is still viable after incorporating these defense mechanisms. > Performance of MFCL from CIFAR-100 to tinyImageNet to SuperImageNet. We thank the reviewer for their observation. We believe the main reason behind this performance degradation is the difficulty of the datasets. SuperImagenet resolution is 3 * 224 * 244, while this value is 3 * 32 * 32 for CIFAR100. This difference in quality causes two types of problems; 1- Training a model on SuperImageNet is more difficult. Even in the centralized setting with all the data present (no FL or CL), the performance of ResNet18 is better on CIFAR100. 2- As the data resolution increases, generating high-quality synthetic data becomes more challenging. To address the first problem, clients need to train a larger discriminative model and to resolve the second issue, we should increase the generative model size. In our experiments, we employed the common backbone architecture for the global model, which is more suitable for cross-device FL. > SuperImageNet dataset SuperImageNet is a subgroup and regrouping of the ImageNet dataset. We will clear this in the main manuscript to avoid confusion. --- We sincerely thank the reviewer for their time, valuable feedback, and questions. We hope to have addressed their concerns and look forward to engaging in further discussion. If the reviewer finds our response satisfactory, we kindly ask them to revisit their evaluation. --- Rebuttal Comment 1.1: Title: Updated rating Comment: I have updated my rating based on the responses above by authors. Thanks! --- Reply to Comment 1.1.1: Title: Thanks reviewer zVmx Comment: We appreciate the reviewer for their constructive questions/comments, finding our rebuttal satisfactory, and raising their rating. We would like to kindly ask if there are any remaining concerns from the initial review so that we might have the opportunity to address those. Once again, thanks for the time to review our work.
Summary: This paper introduces a framework for federated class incremental learning, which employs a generative model trained at the server. This model is then utilized by the client to generate samples from previous distributions to mitigate catastrophic forgetting. Strengths: This paper addresses the challenging problem of CIL (Class Incremental Learning) within the FL (Federated Learning) framework. It introduces a novel approach of data-free generative model training in the FL framework. Furthermore, the paper proposes a new benchmark dataset for FCIL called SuperImageNet. The effectiveness of the proposed method is demonstrated through extensive experiments. Weaknesses: This paper uses small-scale model and generative model. It is uncertain how the communication cost and local computational cost will be affected when scaling up and using a generative model. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) Is it practical for all clients to share same task transitions at a specific point in time? 2) If storing data locally becomes an issue, the local model update should be performed in a single iteration, and the samples should be discarded afterward. Allowing multiple iterations implies that data has already been stored locally, thus raising privacy concerns associated with data storage. If limited memory resources are the problem, it is understandable. However, there is a concern about whether the generative model will maintain affordable computational complexity as the scale increases. 3) If the server maliciously trains the data-free generative model using a specific local model instead of the aggregated model, wouldn't this lead to privacy issues? 4) Is the L_KD in Eq. (8) and Eq. (9) same with the L_KL in Table 5? 5) At each task boundary, how did you validate the trained generative model? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I have some concerns about the communication cost and local computational cost that may arise when using a generative model at scale. It remains uncertain whether these costs will be negligible still. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for reading the details of our paper and appreciate their comments highlighting the novelty of our method and our extensive empirical results as the strengths of the paper. We are grateful for their insightful questions and try to address their concern in detail: --- > Practicality of sharing the same task transitions. We want to highlight that in the majority of existing FL studies, client data remains static throughout the entire process. We believe our paper and, in general, federated continual learning is a step towards making FL more realistic where we can observe the impact of change in local data. Undoubtedly, the next step is to relax the setting and let the users’ data change anytime. In addition, tasks in centralized continual learning are usually isolated and do not overlap to ensure we only observe the challenges of forgetting. Following the same standard practice, we also assume that clients change their training data at specific times. In practice, clients and the server can establish an agreement on task transitions to improve the global model’s performance. The server can handle task transitions in this setup, and clients can change their local tasks according to the agreement. For example, consider multiple hospitals training a shared model using patient data. The server can analyze the historical data to identify trends, determine specific timestamps, and ask the clients to train the model on the most common illnesses in the seasons. > Effect of generative model scale on communication and computation costs. In general, mitigating forgetting requires additional mechanisms on top of traditional FL and introduces new costs in terms of communication, computation, or memory. **Comparison with the cost of prior work** *MFCL scales better with the number of tasks*. In memory-based techniques, clients need to assign a fixed memory to all the tasks (which requires deleting old data as the number of tasks increases and, therefore, degrades performance) or a fixed memory per task (the total memory increases linearly by tasks). In contrast, the generative model size is independent of the number of tasks. *MFCL clients can delete the model*. The generative model is solely required during training, and clients can delete them afterward. Conversely, memory-based techniques require clients to save the data at all times because once deleted, that data become unavailable. **Potential ways to reduce the costs of generative models** There are different design choices in MFCL that one can exploit to trade off these costs. To name a few; *Computation cost.* MFCL, in its current form, requires clients to generate synthetic data in every batch. Clients can reduce this cost by generating and storing synthetic images once (only for training) and deleting them afterward. To further reduce the cost, clients can request the server to generate and send them synthetic images instead of the generative model. *Communication cost.* The discussed methods can also reduce communication costs. Another alternative is to send the generative model once per task and store it on the client side. Existing techniques, such as pruning, can further reduce the model size while preserving performance. *Resource-limited clients*. In terms of scaling the generative model, if there are resource-limited clients among the participants, the server can train different generative models with different qualities. Clients can then select the proper generative model based on their restrictions. > Privacy concerns associated with data storage. There are different regulations and rules that limit the storage time of users’ data. Service providers are obligated to erase the data eventually after a specific period. Sometimes, the data is available only in the form of a stream and never gets stored; then, we can only use it once for training. However, most of the time, the regulations allow the service provider to store the data for longer, which is enough to perform a few training rounds. After this period, the local data will change, but our method helps the model preserve its knowledge of the unavailable data from old tasks. > Malicious server This is a great question, and we thank the reviewer for pointing this out. FedAvg and other common aggregation methods where the server has access to clients’ updates are susceptible to such attacks. The crucial point is that MFCL trains the generative model based on the aggregated model, which is already available to all clients and servers in the case of FedAvg. So, MFCL is vulnerable to the same set of attacks as FedAvg (including the server accessing clients’ updates) but does not introduce additional problems. In contrast, prior work requires clients to share a locally trained generative model or perturbed private data, potentially causing more privacy issues. **Secure aggregation.** A potential solution when clients do not trust the server with their updates is “Secure Aggregation”. In summary, secure aggregation is a defense mechanism that protects update privacy from malicious servers. Since MFCL does not require individual updates, it is compatible with secure aggregation and can be employed to align with this dense. > How to validate the generative model. We validate the generative model in different ways: 1- Monitoring the trends in each loss term to verify if they are decreasing. 2- Examining the quality of the synthetic data. 3- Evaluating the final performance of the global model. > Are L_KD in Eq. 8, 9 and L_KL in Table 5 equal? This is a typo, and we will fix it in the paper. --- We sincerely thank the reviewer for their time, valuable feedback, and questions. We hope to have addressed their concerns and look forward to engaging in further discussion. If the reviewer finds our response satisfactory, we kindly ask them to revisit their evaluation. --- Rebuttal Comment 1.1: Comment: Dear authors, Please feel free to add further clarification to reviewer NkkA's comments/questions. Best, AC --- Reply to Comment 1.1.1: Title: Thanks Area Chair Bkb2 and reviewer NkkA Comment: We appreciate the AC for following up on our rebuttal and thank the reviewer for their helpful reviews. However, since we have not received any response from reviewer NkkA on our rebuttal, we are not sure which of their concerns remain and require additional clarification. It would be greatly appreciated if the reviewer could kindly let us know if our rebuttal has properly addressed their concern. Here, we provide a summary of our rebuttal; 1. **Communication and computation costs at scale**. The extra cost of CL methods is to help clients mitigate forgetting. In our rebuttal, we explained generative models (as we use in our work) scale better than memory-based techniques since (1) the generative model's size is independent of the number of tasks and (2) clients can delete the model after training. In addition, we provided a few ways to reduce the extra overhead in communication and computation. 2. **Privacy concerns associated with data storage**. Data retention policies, in general, are concerned with the purpose and the duration of storing personal data. For example, one of the principles of GDPR (General Data Protection Regulation) requires data to be removed after it is processed for its stated purpose. The permitted storage duration can vary based on the data type and business. However, this duration can be long enough to train the local model for a few iterations. 3. **Malicious server**. A common solution to the mentioned malicious server attack is secure aggregation. We designed MFCL to be compatible with secure aggregation and other common defense mechanisms. The key is that MFCL does not require access to individual client updates and uses the aggregated model for training. Therefore, training the generative model is still viable after incorporating these defense mechanisms. 4. **Sharing the same task transitions**. We share this property with recent federated continual learning papers (e.g., FedCIL[1] @ ICLR'23) and follow the standard practice in centralized continual learning to measure the effectiveness of our approach. We believe this setting is closer to practice than conventional FL studies, where the training data does not change. In addition, clients and the server can form an agreement on the transition. [1] D. Qi, et al. "Better generative replay for continual federated learning." ICLR, 2023.
Summary: The work proposes using a server-learned generator for synthetic data replay for federated Class-IL. The method saves client-level compute while preserving data-privacy. The authors show their method outperforms existing methods on 2 existing benchmarks, plus a proposed larger-scale ImageNet benchmark protocol. Strengths: 1) I am already familiar with this setting, but I feel that the authors did a good job at selling the problem-setting to me. While potentially space inefficient, table 1 is very concise and convincing. 2) Moving from multiple client generators -> server generator is very justified, as it will save client compute time. Also, the generator does not need to be communicated from the client to the server, only from the server to the client. 3) The proposed approach does have significantly better performance, especially on the first two datasets. 4) Thank you for the transparency on training costs and server overhead. I do not think the large server overhead is a big deal - since the client costs are minimally affected in T = 1 compared to FedLwF. Weaknesses: 1) The novelty seems to be in more of a high-level idea combination of [45] and federated learning. I am not sure the method is truly impactful from a novelty perspective. 2) SuperImageNet is a protocol benchmark, not a "new benchmark dataset". However, I do appreciate the protocol and agree it is better than CIFAR-100 and TinyImageNet. 3) Overall, the performance is very weak of all methods. I wonder if the impact might be increased with a more realistic setting where past class examples may re-appear in future tasks. Could also consider a small replay buffer and/or a pre-trained model. 4) Often, federated learning papers include some type of theoretical analysis on, for example, convergence. Technical Quality: 3 good Clarity: 3 good Questions for Authors: a) What specifically would you state are your method contributions compared to a federated variant of [45] and similar approaches? b) How do you think your method would perform in more realistic federated CL settings where classes are not Overall, I am worried about the contributed novelty and impact for a venue such as NeurIPS. However, I do think the paper is sound, and in-line with similar works accepted to recent high-tier conferences (e.g., [A]). I am very borderline, but would rather lean towards the accept side. A. Qi, Daiqing, Handong Zhao, and Sheng Li. "Better generative replay for continual federated learning." arXiv preprint arXiv:2302.13001 (2023). Other -> I am not sure that data-free image generation actually protects data-privacy concerns, since you are creating synthetic training data, but, since this line of work is already established, I do not feel the authors need to further justify it. -> Line 48, might be missing a space between "replay" and "[42]" Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The discussion on limitations is concise and complete. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for reading the details of our paper and appreciate their comments highlighting our design choices, superior performance, clarity, and writing as the strengths of the paper. We are grateful for their insightful questions and try to address their concern in detail: --- > Novelty of our method In this paper, we identified the main challenges toward federated continual learning and designed a method to address these challenges. Our framework is **unique** in that it achieves higher accuracy and lower forgetting without requiring any episodic memory and without introducing additional privacy issues compared to the existing alternatives. Our experiment setup is arguably more complex (larger number of clients and larger datasets) than those explored in prior work, such as FedCIL, indicating the effectiveness of our approach. Even though we borrowed ideas from continual learning (including standard data-free knowledge distillation training and the loss function that has been proven to be effective), we are the first to employ these ideas in the context of federated continual learning. We hope the reviewer agrees that our framework is not a straightforward combination of federated learning and continual learning. We could not outperform other state-of-the-art baselines without our novel design choices, such as deciding to train the generative model on the server (which was pointed out by the reviewer as one of the strengths). > Performance in realistic scenarios. Please refer to Figure 2 in the attached pdf for the new experiment results. We define memory as the number of samples from previous tasks. In the paper, we showed the results for $\alpha=1$ and memory size = 0 (meaning that no sample reappears). In the new experiments, we evaluated three different scenarios for the CIFAR100 dataset with 50 clients and evaluated each scenario for two different memory sizes (20 and 50) – each client has 100 samples for each task, so memory size = 50 means 33% of training data is from previous tasks. To pick the reappearing samples, we choose the same number of samples from each previous task and pick the specific sample uniformly at random. **Scenarios** A) non-IID data distribution with $\alpha=1$ B) Highly non-IID data distribution with $\alpha=0.1$ C) Highly non-IID data distribution with $\alpha=0.1$ and dynamic client participation. In this scenario, at the end of each task, 5 clients would leave the training, and 5 new clients would join. **Results.** In all the experiments increasing memory size would improve the final performance. We still see that combining the power of generative models with memory examples has superior performance compared to other baselines. Due to time constraints, we ran the experiments only on one seed. > Theoretical analysis on convergence. In this work, we focus on presenting a more realistic experimental setting with various datasets. However, we believe the convergence analysis of our algorithm is mainly determined by the aggregation mechanism. In our paper, we use FedAvg as the aggregation mechanism in the server, and the convergence of this method is already proved. In FedAvg, the most common local objective function is Cross Entropy (CE). In our algorithm, after task 1, clients add a new term to their objective: $L_{FT}$ and $L_{KD}$. $L_{FT}$ is also a CE loss for synthetic data, and $L_{KD}$ is an MSE loss. Since MSE is L-smooth and $\mu-convex$, we believe adding these two losses does not change the convex property of the original CE loss. It is also commonly assumed that variance and expected squared norm of stochastic gradients of all the clients are bounded. Given that adding MSE loss should not change these properties, we believe we can follow the same proof as [1,2] (or similar analysis) for convergence. However, we leave detailed and formal proof for future work. [1] Li, Xiang, et al. "On the Convergence of FedAvg on Non-IID Data." ICLR. 2019. [2] Charles, Zachary, and Jakub Konečný. "Convergence and accuracy trade-offs in federated learning and meta-learning." International Conference on Artificial Intelligence and Statistics. PMLR, 2021. > SuperImageNet is a protocol benchmark. We agree with the reviewer and will clarify this in the main manuscript. --- We sincerely thank the reviewer for their time, valuable feedback, and questions. We hope to have addressed their concerns and look forward to engaging in further discussion. If the reviewer finds our response satisfactory, we kindly ask them to revisit their evaluation. --- Rebuttal Comment 1.1: Title: Reviewer p34V response to rebuttal Comment: Due to the hard work by the authors in answering my questions (including the new experiments), I have increased my score to weak accept. I still have some minor reservations on novelty/impact, but the paper is very sound with clear contributions and thus I recommend its acceptance. --- Reply to Comment 1.1.1: Title: Thanks reviewer p34V Comment: We thank the reviewer for their valuable questions/comments, finding our rebuttal satisfactory, and raising their score.
Summary: This work introduces MFCL, a method to primarily alleviate the catastrophic forgetting problem that arises in (more realistic) FL settings framed under the continual learning paradigm. In MFCL, the model is split into a generator (only trained on the server side) and a discriminator (only trained by clients). The former is data-free generative model that is trained to output synthetic samples suitable for the task(s) the discriminator is trained (in federation) to perform. This generator is also used by the clients to inject some synthetic training examples into their own local training stages. This component helps counteracting the forgetting problem. The empirical evaluation suggest that MFCL is far superior to other methods. Strengths: The main strength of this paper is that it studies a far more realistic setting in which FL is used in practice for cross-device settings: clients come and go; data in the clients changes over time (more is added, some might be deleted); and, there is no centralised dataset available on the server. To this problem, MFCL proposes a solution that, although borrowing components and ideas from existing works, have been adapted to the FL setting and they work well. Other strengths that I have identified: * The generative model is of a reasonable size (<1M params if my understanding is correct when looking into Table 1 in the Appendix) * The Authors proposed SuperImageNet dataset. Weaknesses: The main weaknesses I see in this work are related to lack of clarity in some important point: * In my view, folks in the FL community might not be familiar with the concept of "task" (common in the CL literature). This means that Sec 3.2 and other parts before might not be so clear. Later in Sec 4 (in line 261 onwards) a clear example of what a _task_ is is presented. Maybe it's worth giving an informal definition earlier? * The main results (i.e. Figure 5) are not super clear how to interpret. What does it mean "shows the accuracy of the model on the observed classes so far" (line 315)? Is the plot generated at `t=10` (i.e. after all tasks are completed)? * Following the comment above: wouldn't it be more informative to have "task" on the x-axis and show how test accuracy (or forgetting) changes as time passes? Presenting the results in such way are more common in some recent CL works I've been reading [1] (see for instance Figures 3,4,5 -- no, I'm not an author :) ) * Figure 3: I think it could be improved adding more details and nomenclature introduced in the text otherwise, what toes the right hand side of Fig3.a tell us that we don't know about all generative training schemes out there? Also, is the "aggregate" green box in Fig3.a outputting the discriminator (yellow rhomboid on the right)? (I think so, so how about connecting them?) [1] https://arxiv.org/abs/2303.11165 Technical Quality: 3 good Clarity: 2 fair Questions for Authors: In addition to the questions I ask in the _Weaknesses_ section, I have a few more: * Maybe include in the Appendix some additional information about the experimental setup: it seems you used PyTorch, as for FL framework I see there are some utility functions from Flower but for the rest I looks you implemented a custom for loop. * Why not going a step further with `SuperImageNet` and also fix the number of clients it can be divided into? In FL papers are often hard to compare because people use datasets (synthetic ones specially) in vastly different ways. In my opinion if you were to fix how many partitions it contains, it would help easing the reproducibility problem of FL and ensure subsequent papers that use `SyperImageNet` do so under a common setup. What do you think? * The role of injecting synthetic images during training (on the client side) could be seen as some form of "alignment" mechanism. This has been presented before in FL [1]. Maybe the Authors could comment whether calling this "alignment" is correct and put it in context with other works? (I only suggested the one that comes to mind immediately, but there are more) * (very minor style comment) wouldn't It be better to have only full page wide figures/tables (like Fig3) instead of half-page ones (like Tab2,3 and Fig 4) ? [1] https://arxiv.org/abs/2202.05963 Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: There are a few limitations that come to mind after reading this paper: * How does this method extend to other domains beyond image classification? or image-based problems in general? (I'm inclined to suggest the title of this paper to include the "image" or "vision" keyword) * In the Discussion, the Authors briefly talk about the privacy implications. I agree that synthetic images do not "resemble any specific training example" but this doesn't have to hold to have some privacy leak. As stated in line 61, the generative model "does not cause any extra data leakage from them [the clients]". But there is a data leak on the discriminator part (just like any other FL method that doesn't apply Differentiable Privacy or similar). Could the Authors comment what would be the implications of adding DP to the discriminator on the client side? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for reading the details of our paper and appreciate their comments highlighting the strengths of our method and finding the setting more realistic and practical. We are grateful for their insightful questions/suggestions and try to address their concerns in detail; --- > Implications of adding DP to the discriminator. This is a very interesting question, and we thank the reviewer for raising this point. We want to highlight that in data-free generative model training, the generator can only be as good as the discriminator. If the global model can learn the decision boundaries and individual classes despite having DP, the generator can learn this knowledge and present it through the synthetic example. Otherwise, if the global model fails to learn the current tasks, there is not much knowledge to preserve for the future. With the DP guarantee, the main challenge is training a reasonable global model. Prior work explores the impact of DP in training the generative model in centralized settings. As an example, [1] shows promising results by training a discriminator with a DP guarantee and training an effective generative model on top of that. [1] Liu, Bochao, et al. "Privacy-Preserving Student Learning with Differentially Private Data-Free Distillation." 2022 IEEE 24th International Workshop on Multimedia Signal Processing. > Relation between MFCL and alignment mechanism. We believe these two lines of work, despite sharing similarities, are mostly orthogonal to each other. Alignment refers to the process of finding a proper permutation of the updates to improve the performance of the aggregated model, whereas in our case, we use a generative model to mitigate forgetting previous tasks. At a high level, both techniques can benefit from shared synthetic data on the client side. However, finding the proper alignment to aggregate does not address the forgetting problem. Our method helps clients to generate data from old tasks (that they may not have access to the data anymore) and inject those samples into their training loop. We may not have understood the question correctly, and appreciate it if the reviewer could correct us if there is a misunderstanding. > Clarifying Figures. **Figure 5.** In Figure 5, we iteratively introduce new tasks. Each task consists of 10, 20, and 5 new classes for CIFAR100, TinyImageNet, and SuperImageNet-L, respectively. To be more specific, the x ticks in all three figures are equivalent to task = {2, 4, 6, 8, 10}. Each point in the plot measures the accuracy of all the classes from the beginning up to that point. For example, the second point from the left shows the accuracy on the first 2 tasks (i.e., 20 classes for CIFAR100). We will clarify this in the paper and will change the x-axis of the figure to show tasks. **Figure 3.** The right-hand side of figure3(a) emphasizes the difference in frequencies of aggregation and training of the generative model. This is important because the latter is potentially more expensive than aggregation, but the server performs it once per task. The output of the aggregation is the global model, which we later use as the discriminator in training the generative model. We will replace the figure in the paper with Fig1 in the attached PDF to clarify this. > Extention to beyond image classification; add "image" or "vision" in the title. Our method relies on training generative models in a data-free manner. Most prior work in this domain focus on vision tasks since images are more resilient to noise. The generated data may not resemble any meaningful entity but still can help to distinguish different classes. Unfortunately, this may not hold for NLP tasks. As a result, the extension of our method to text data is not straightforward. We will definitely incorporate the reviewer’s suggestion, add ‘image’ or ‘vision’ to the title, and discuss this in the paper. > Task definition. We will add the following definition in section 2. Task: in class incremental learning, the training data is presented to the learner as a sequence of datasets - commonly known as tasks. In each timestamp, only one dataset (task) is available, and the learner’s goal is to perform well on all the current and previous tasks. > Additional information about the setup. Our implementation is based on Pytorch 1.13.1 and does not contain any FL framework in the main body. We use FLOWER only for distributing data among clients using LDA. > SuperImageNet with a fixed number of clients. We appreciate the reviewer’s suggestion and will release the dataset with the mentioned property. --- We sincerely thank the reviewer for their time, valuable feedback, and questions. We hope to have addressed their concerns and look forward to engaging in further discussion. If the reviewer finds our response satisfactory, we kindly ask them to revisit their evaluation.
Rebuttal 1: Rebuttal: We thank all the reviewers for their insights and comments on the paper. Here we have attached a PDF that includes the following; * A more clarified version of Figure 3.(a) of the paper for reviewer 4Enn. * More experiment results with increased memory sizes for reviewer p34V. Pdf: /pdf/adb855bf7bb96c91a653f05221ce9e93408cda60.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
On Formal Feature Attribution and Its Approximation
Reject
Summary: The paper proposes and studies a notion of feature attribution in which features are scored for a given instance according to the proportion of minimal explanations for that instance in which they participate. Although an exact computation of this scores can be computationally unfeasible, the paper exploits the minimal-hitting-set duality between abductive and contrastive explanations to approximate them efficiently. Finally, they study their approach empirically over several datasets. Strengths: - The paper is really well written and easy to follow; presentation is excellent - The paper does a good job at showcasing the importance of the problem and providing references to issues that are present in SHAP - The proposed metric is simple and seems promising - The experiments have good results and are presented nicely - The sections on limitations and conclusions provide a nice and helpful starting point for further discussion - References seem appropriate and detailed Weaknesses: - The paper does not provide proofs in the supplementary material for its propositions. While proposition 1 is self-explanatory, proposition 2 is not and should be accompanied with a proof. - Despite mentioning the weighted variant (Definition 2), the paper doesn't seem to do anything with it; there are no theoretical nor practical results about it as far as I can see, and it is only discussed in the appendix. I understand that due to the page limit not everything can fit in the paper, but including the definition of WFFA without doing anything with it in the paper seems like a poor choice to me. - Even though the paper is about formal explainability, and the scores themselves are formally defined, the approximation algorithm doesn't seem to have any formal guarantee, and thus it is unclear to me what the interpretation of the results should be. It seems to me that further theoretical studies are required; how do we know that their approach can't fall into cases where it gives answers that are arbitrarily far from the ground truth? The starting point of the paper is about how methods such as SHAP have pitfalls when certain conditions are met, and yet it is not clear at all that this approach is exempt of similar (or even worse) potential problems. - The experimental data is, unless I am missing something, a bit strange; LIME, SHAP and their FFA approximation compute different things, both in theory and practice, and thus I don't understand at all what the meaning of their comparison is; LIME and SHAP are not approximations to the FFA score, and thus it doesn't feel sound or fair to use them as such. Their idea of taking absolute values and normalizing might make sense, but this is far from obvious and limits how convincing their results are. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: Although I don't have particular questions to the authors, my perspective (and therefore scores) could change by them convincingly addressing the points raised in the weaknesses section above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 4 excellent Contribution: 2 fair Limitations: Yes; there doesn't seem to be much to address and the authors discuss appropriate points in section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the comments! In the following, we do our best addressing them, which hopefully convinces you that our work has enough merits to get accepted. ### On Missing Proofs Note that the proof of Proposition 2 is really only a single line, making use of results in the cited work [22]: **Proof of Proposition 2.** From [22] (Proposition 7), deciding whether a feature is relevant is in $\Sigma_2^P$ assuming deciding (1) is in NP. By Proposition 1, so also is deciding whether $\text{ffa}_\kappa(i, (w,c)) > w$. ### On Weighted FFA We added the weighted variant because it seems more *"natural"* and *"fair"* to treat features in short explanations as having more weight. But in practice since the lengths of AXp's are tightly clustered around the mean, the unweighted and weighted variants are almost indistinguishable. Since the unweighted version is easier to explain/understand, we concentrate on this in the paper. However, if we omitted the discussion of the weighted version entirely, we believe it would be an obvious question for readers, hence we think it is worth including. ### On FFA Approximation Guarantees First, FFA approximation has one strong property (which we should have put in the original paper): Any feature given non-zero importance by the approximation, is guaranteed to have non-zero (true/exact) importance. Note that this is not the case for LIME and SHAP. Second, we can argue about the worst-case approximations convergence to the true FFA. Suppose we have discovered $n$ out of $N$ total AXp's. The FFA approximation for feature $f$ is given by $\hat{p}_f = \frac{c_f}{n}$ where $c_f$ is the count of the number of AXp's found so far including feature $f$. We have that $$ \frac{c_f}{N} \leq \hat{p}_f = \frac{c_f}{n} \leq \frac{c_f + (N-n)}{N} $$ The left holds since $n \leq N$ and the right holds since $c_f \leq n$. Notice that the true proportion $p_f = \frac{c_f + c'_f}{N}$ also lies between these bounds, since $c'_f$, the count of feature $f$ in the *remaining* unseen AXp's, is between 0 and $N - n$. Hence the error $|p_f - \hat{p}_f|$ is bounded by the max distance from $p_f$ to the upper or lower bound. As the number of trials increase, this max error decreases. For example, if we have determined half the AXp's, the difference between the estimated and the true value is at most half in the worst-case scenario. Of course in practice the error is much lower. This is because in practice the AXp's generated to support a single CXp must necessarily be *diverse*, which means the error reduces rapidly. Getting a better theoretical bound for this behaviour remains (challenging) future work. ### On Experimental Results Since LIME and SHAP have different ideas of what feature importance means, they clearly are not aimed at computing FFA. We compare with them to show that these definitions are not the same in practice as FFA. In this regard, please also see the general comments above. Finally, we normalize results to avoid any differences *simply arising from scaling*. --- Rebuttal Comment 1.1: Comment: The response says "Note that the proof of Proposition 2 is really only a single line, making use of results in the cited work [22]:", and then: "Proof of Proposition 2. From [22] (Proposition 7), deciding whether a feature is relevant is in $\Sigma_2^p$ assuming deciding (1) is in NP. By Proposition 1, so also is deciding whether the ffa score is greater-equal than a threshold w". Sorry but this proof strikes as fully wrong, unless I'm missing something important. The fact that deciding feature relevancy is in $\Sigma_2^p$ implies that deciding whether the score is 0 or greater than 0 is in $\Sigma_2^p$, but this implies nothing about whether the problem of deciding whether is greater-equal than, say, $0.1$ is in $\Sigma_2^p$. For instance, if we define the score of a CNF formula as the number of satisfying assignments / total assignments, then clearly deciding "score > 0" is in NP, but deciding whether it's " >= w" for an input "w" would require counting, and unless #P is contained in the polynomial hierarchy, we don't expect the "score >= w" problem to be. Again, I could be misunderstanding something, in which case I'd apologize for the confusion, but I still don't see at all how the problem is in $\Sigma_2^p$. Moreover, the statement of Proposition 1 mentions $w > 0$, which shouldn't be a big issue as the input in which one sets $w = 2^{-m}$ is enough to prove hardness, but I'd have appreciated if both Propositions 1 and 2 were to make explicit that they're considering $w$ as part of the input, and not that there is a fixed $w$ for which the hardness occurs. --- Reply to Comment 1.1.1: Title: Ambiguity in Proposition 2 Comment: Thank you for the valuable comment. Proposition 1 holds for any fixed value of $\omega>0$. Regarding Proposition 2, the confusion comes from the ambiguity regarding the formulation and quantification of $\omega$, which we apologise for. Proposition 2 should have been stated as follows: "Deciding whether there exists $\omega\in(0,1]$ such that $\text{ffa}(i, (\mathbf{v}, c)) \geq \omega$ is in $\Sigma_2^P$ if deciding (1) is in NP". This essentially means answering whether a given feature's FFA is positive. This holds because we can answer "yes, such an $\omega$ exists" if the feature is relevant; and answer "no such constant $\omega$ exists" if the feature is irrelevant. We will restate the proposition in the final version by removing $\omega$ and instead asking whether $\text{ffa}(i, (\mathbf{v}, c))>0$. Again, thank you!
Summary: This paper introduces a novel approach to XAI called Formal Feature Attribution. The authors address the limitations of existing model-agnostic methods and formal XAI approaches by proposing FFA as a solution for feature attribution. The FFA method leverages formal explanation enumeration to define feature attribution as the proportion of explanations in which a specific feature occurs. The paper highlights the challenges in computing exact FFA but presents an efficient approximation technique using a dual trait of such explanations. Strengths: The paper exhibits a high level of clarity and coherence in its writing style, making it easily understandable for readers. The claims and contributions are clearly stated, enabling the reader to grasp the main objectives of the research. The paper makes a noteworthy contribution to the field of Explainable AI by introducing a novel approach. This fills a gap in the existing XAI landscape, where formal foundations are relatively loose, and offers a promising solution for feature attribution. The authors demonstrate extensive work in various aspects of the research. They have invested effort in both the theoretical aspects, establishing formal foundations for FFA, as well as in the practical aspects, conducting experiments to validate the proposed approach. This comprehensive approach enhances the credibility and reliability of the findings. Weaknesses: Axiomatic Analysis of FFA: While Lime and Shap have established a set of axioms for explanations, it remains unclear which set of axioms the FFA explanations adhere to. It would be beneficial to explore and define the axioms underlying FFA explanations or engage in theoretical debates surrounding them. For example, I think that duplicate features increase the importance of other features that exists in a common AX'p. Using a specific example, think of a scenario where two correlated features, f1 and f2, are considered. In this case, a decision tree is constructed where f1 appears at a certain node while f2 is absent. Now, create a separate decision tree that includes the condition for either f1 or f2 to appear at that node, resulting in identical predictions due to their correlation. However, despite the consistent predictions, I'm pretty sure that the two models provide different explanations for the features, questioning whether the ratio of feature occurrence in formal explanations increases equally in the numerator and denominator. Approximation Guarantees for FFA: Since FFA is an NP problem, it necessitates further consideration regarding the guarantees provided by approximation techniques. Investigating the quality and limitations of these approximations would strengthen the practical utility of FFA. The absence of code implementation limits the reproducibility and practical adoption of the proposed FFA approach. Addressing these aspects would further enhance the theoretical and practical implications of FFA within the field of Explainable AI. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Isn't the approach of finding a minimal subset of features for which the classification remains the same too strict? Could a probabilistic approach potentially be more suitable, enabling the smoothing of results and accommodating variations in feature importance? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Distinguishing Out-of-Distribution and Manifold Sampling: The paper does not explicitly discuss the distinctions between out-of-distribution sampling and manifold sampling, despite their potential significance in this context. Exploring the differences and implications of these sampling techniques would contribute to a more comprehensive understanding of FFA. Addressing Local Explanations: The paper briefly touches on the issue of local explanations, but does not delve into their meanings, differences, or implications. Future research could focus on formulating FFA specifically for global explanations, shedding light on the disparities between local and global explanations within the context of FFA. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive view on our work! Please find our response in regard to the weaknesses and limitations you identified in it. ### On Axiomatic Analysis and Approximation Guarantees We agree that some initial axiomatic analysis would be nice to have for FFA and we will add a couple of comments on this in the final version of the paper. For example, we can surely claim that a feature irrelevant for a given prediction made by a given model will have a zero FFA. Note that this applies both to the exact FFA but also to approximate FFA, e.g. taken after a time limit is reached. This is a strong result, which does not apply to LIME and SHAP. *Please also see a larger comment on FFA approximation guarantees given to reviewer CksZ.* ### On Correlated Features In your example with two decision trees with correlated features $f_1$ and $f_2$, the values of FFA will depend on what formal abductive explanations are for the two trees. If both trees compute the same classification function then the sets of all AXp's will be identical, which in turn means the values of FFA will be the same for all the features. Otherwise, if the trees have different sets of AXp's, the values of FFA for $f_1$ and $f_2$ may indeed differ. We should note that there is no issue with this per se because an explainer is meant to provide reasons for the behavior of a given classifier in a particular situation rather than explain the ground truth. Therefore, the values of FFA will be correct for both decision trees as they reflect the behavior of these concrete classifiers. Note that we can then use these values to compare the classifiers and see which of them is more reasonable to apply. Having said that, we agree that the feature correlation is a potentially crucial problem that occurs for many explanation approaches. From the perspective of formal explanations, one can integrate ground truth constraints detecting feature correlations when enumerating AXp's [20, *], which will result in more reliable FFA values. [*] Jinqiang Yu, Alexey Ignatiev, Peter J. Stuckey, Nina Narodytska, João Marques-Silva: Eliminating the Impossible, Whatever Remains Must Be True: On Extracting and Applying Background Knowledge in the Context of Formal Explanations. AAAI 2023: 4123-4131 ### On Missing Implementation We are sorry to see that you could not find the source code of the implementation. Please notice that the source code of the implementation as well as the complete experimental setup *were submitted* with the paper as "supplementary material". ### On Extracting Subset-Minimal AXp's You are right that extracting subset-minimal AXp's is computationally quite challenging and one could try to accommodate the use of probabilistic approaches here. We will consider this in the future and will comment on this in the final version. ### Out-of-Distribution vs Manifold Sampling Note that as formal explanation approaches (including ours) do not rely on sampling, they are not susceptible to *any* sampling issues. Having said that, we will surely consider other sampling-based explainers in our future work, to see how their feature attribution correlates with FFA and how sampling may affect the quality of the result explanations. ### On Local vs Global FFA Thank you for the comment! We agree this would be a very interesting line of future work comparing the pros and cons of local vs global FFA and the computational challenges arising in both cases. --- Rebuttal Comment 1.1: Comment: I appreciate your acknowledgment of the clarification regarding the method's independence from sampling. Notably, this aspect has been consistently commented on across all the reviews. I believe it would be beneficial to provide an extended explanation within the paper to address this point more comprehensively. Regarding correlative features, comparing models using scores is possible, but what exactly does this score difference means? Additional elaboration on this aspect [see ref attached] would be valuable. Chen, Hugh, et al. "True to the model or true to the data?." arXiv preprint arXiv:2006.16234 (2020). --- Reply to Comment 1.1.1: Title: On Additional Comments of Reviewer FoWS Comment: Indeed, given the reviewers' confusion, we will add a clarification on the independence of our method from sampling in the final version of the paper by using the comments provided in the rebuttal and sacrificing some of the experimental details in the paper. Thank you for the suggestion. On what the score comparison shows, we should once again note that although attribution scores computed by LIME and SHAP certainly do not target to replicate FFA, they are still claimed to compute some attribution scores and the score difference we observe demonstrates that the attribution reported by these explainers is clearly far away from FFA, a novel, simple, formal, and easy-to-understand attribution measure. The same observations can be made wrt. FastSHAP and KernelSHAP additionally tested upon one of the reviewers' request. We believe these observations should motivate the community to think over what these explainers actually compute in practice. As for correlative features, thank you for the pointer. We will cite this work. Our method aims at explaining the behavior of the model and so the explanations we compute are "true to the model". Also note that our method surely avoids assigning non-zero attribution to the features unused by the model, i.e. it respects the Dummy Axiom. As for the "interventinal conditional expectation" studied in this paper and the use of LIME/SHAP/FastSHAP/KernelSHAP, note that although our running example model *does use* feature "Relationship", it still *has nothing* to do with the prediction for the concrete instance discussed in the paper (because this feature does not belong to any AXp) and so its attribution should be zero while LIME, SHAP, and KernelSHAP claim it is not. It might be the case that the issue pertains to the exact Shapley value for this feature in the instance too (although we have not calculated it). Testing this would be interesting in the future (to confirm whether the findings of [21] hold here).
Summary: The authors propose formal feature attributions, a novel type of local feature attribution method for explaining the predictions of black box models. Their approach builds on the notion of abductive explanations (AXp's), a type of minimal sufficient subsets. One issue with AXps is that there are a potentially exponential number of them, most of which fall outside of the data distribution. The authors propose - essentially - to summarize the set of AXps via averaging into a per-feature relevance score akin to those provided by LIME and SHAP. An inverse proposeity weighted variant is also introduced. The authors then show that typically computing FFAs is computationally intractable and propose an approximation algorithm for quickly estimating approximate FFAs. This approximation is shown to work well on two MNIST-like data sets. **Post-rebuttal update**: increased score, see the discussion. Strengths: + Very clearly written, a pleasure to read. + Ideas are clearly presented, with a couple of exceptions (see questions below). + Related work is well done. + FFAs are rooted on a simple and clear concept. + Algorithm is sensible. + Good empirical performance on a relatively varied set of data sets (for boosted DTs only). + Some essential limitations are clearly discussed, but see below. Weaknesses: - The authors seem to assume FFAs are the "real feature attribution", but provide no real motivation for this (see Q1, this is the big one). - Unclear reasons why OOD sampling is an issue and how FFAs deal with it (Q2). - Missing discussion of information loss due to averaging over AXp's (Q3). - Experiments consider boosted DTs only (relatively minor). - Missing evaluation of approximate FFA algorithm on non-MNIST data (minor). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1. I am puzzled by the statement in line 144 that "LIME and SHAP often fail to grasp the real feature attribution in a number of practical scenarios", and by the construction of the first experiment. Here the authors compare LIME and SHAP against exact FFA relevance values, using the term "error" to indicate the difference between relevances output by the former and the latter. It seems to me the authors automatically assume that Eq. 3 is the "real feature attribution", but it is not clear why this would be the case. LIME, SHAP, and FFA measure *different things* with *different semantics* and *different properties*. I'd like to understand why FFAs (which measure the chance that a certain feature occurs in the set of AXp's) are taken to be - essentially - *the* gold standard. I could not find solid motivation for this in the text. The authors write that FFAs are "[...] the percentage of (formal abductive) explanations that make use of a particular feature i", which I do agree with. What I miss is a link between this and believing that FFAs are "better" in all respects than LIME or SHAP (two approaches that, honestly, I am not a fan of). Q2. The authors stress the issue of "out-of-distribution sampling". They mention this is an issue for AXp's because it yields complex explanations, hindering human understanding. They also mention it is an issue for other, less "formal" attribution techniques like LIME and SHAP, but they don't explain why it is a problem. My take is whathever the impact is on LIME and SHAP, it is not related to explanation complexity or length. Then the authors explain that FFAs deal with explanation complexity, which I do agree with as FFAs compress the full set of AXp's into a fixed-size relevance vector. However, I do not see how FFAs deal with the *other*, unnamed problem affecting LIME and SHAP. TL;DR: what's the issue with OOD sampling and LIME and SHAP, and why wouldn't the same issue affect AXp's and/or FFAs? Q3. My understanding is that each AXp is strongly determined by feature interactions. Taking the average over AXp's entirely throws away this information. I think this is an important limitation - not only of FFAs, but rather of all feature attribution techniques, chiefly due to their simple vector format - that should be openly discussed in the paper. I am confident the authors are keenly aware of this issue, and I'd appreciate if they made the readers aware of it too. To turn this into a question: would you agree that FFAs are by their own nature less expressive than (the set of) AXp's? I am more than willing to increase my scores once the authors address these questions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper has an explicit limitations section which is quite well done. I have outlined a couple of other possible limitations in my questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive comments. As the weaknesses you identified directly correlate with the questions asked, let us address them by answering the questions down below: ### Answers to Questions **Q1.** Please see one of the general comments on the use of FFA and the adequacy of its comparison to LIME and SHAP above. **Q2.** We would like to clarify that out-of-distribution sampling represents an issue for sampling-based explainers but not for formal explainers. Namely, out-of-distribution sampling appears if instance perturbation performed by an explainer ignores the actual data distribution and so the explainer creates erroneous instances and so, as a result, faulty conclusions. We do not elaborate on this issue in the paper because it is quite known in the XAI literature since at least [62]. AXp's are not susceptible to OOD, because the computation of an AXp is done by means of formally reasoning about the logical representation of the model and so formally proving that a subset of features is an AXp, using equation (1) and considering the entire feature space. No sampling is used here. Essentially, we prove that there is *no single* counterexample instance in the feature space that would invalidate our AXp. One the one hand, this is a very strong guarantee that makes AXp's "bullet-proof". On the other hand, this also makes AXp's quite bulky as we need to account for all possible combinations of feature values (and hence keep a large number of features in). This issue is one of the motivations for FFA. **Q3.** We agree with you that feature attribution approaches in general lack the information on feature interactions that may be available in feature selection explanations. Therefore, we do believe numeric values of FFA may be practically less informative than the set of AXp's used to generate these FFA values. Having said that, we should note that in stark contrast to the existing feature attribution approaches, the FFA computation and approximation method does not throw the evidence away - the AXp's can be kept and provided to a user on demand if additional evidence or insights turn out to be necessary. This is another advantage of FFA over the existing feature attribution approaches, which will surely comment on in the final version of the paper. Thank you! ### On Experiments with Boosted Trees Formal explanation enumeration, which our approach makes use of, requires one to represent the model of interest in a suitable logical formalism and then formally reason about its behaviour by means of making a series of reasoning oracle calls (note that this is also mentioned in the section on limitations). While for some kinds of models formal explanation extraction is computationally trivial to do, e.g. for decision trees or monotone classifiers, the other extreme is represented by various kinds of neural networks where formal explanation extraction is computationally quite challenging due to the lack of effective reasoners that could tackle the entire feature space with sufficient ease. As a result, we opt for a compromise between the power and generalizability of a model on the one side and computational tractability of formal explanation extraction and enumeration on the other side provided by tree ensembles. Note that as better formal reasoning methods get created for more kinds of ML models, FFA becomes more practically computable. ### On MNIST-Related Data Also, although we do evaluate approximate FFA computation only on MNIST-related data, it is important to observe that exact FFA is shown to be within the reach of our methods for numerous tabular data widely studied in the XAI domain. --- Rebuttal Comment 1.1: Title: Regarding the authors' response to Q3 Comment: Also in SHAP, it's possible to retain the intermediate outcomes, which can provide insights into interactions. If I grasp correctly, taking an average might obscure interaction effects, which could be crucial information. Moreover, AXp alone might not inherently unveil these effects; instead, they might require additional investigation based on the collected AXp data (which again, as I said, holds for SHAP as well). If your method claims to offer an advantage in identifying significant interactions, it would be beneficial to provide a more detailed explanation of how exactly it achieves this. --- Reply to Comment 1.1.1: Title: Regarding the reviewer's FoWS additional comment on Q3 Comment: Thank you for the additional comment. We would like to point out that given a set of AXp’s collected, there are multiple ways to use them. For example, we can compute exact or approximate FFA. If a user needs evidence for the FFA values reported, some (or all) AXp's may be given to the user. Also, we can determine positive or negative feature correlation based on AXp's. We can also use CXp’s computed in the enumeration process to provide additional insights on features that contribute to changing the prediction. Overall, there is certainly much more to explore here and we believe FFA is a starting point for understanding the relationship of features to the prediction, which at least agrees with feature relevancy.
Summary: This work proposes a new approach called formal feature attribution (FFA), inspired by successful FXAI methods, to compute feature attribution scores. FFA is defined as the proportion of explanations where a feature occurs. Experiments try to demonstrate the effectiveness of FFA compared to SHAP and LIME under several public datasets. Strengths: 1. The authors proposed a new perspective on providing feature attributions. They did point out some limitations of existing popular methods. 2. The proposed FFA is straightforward and with clear motivations. Weaknesses: 1. Important questions regarding the reasonableness of the proposed method are unanswered. For instance, the three advantage properties of FFA claimed in this paper are not well reasoned. 2. The experiments provided in this work are hard to support FFA is better than other existing XAI scoring methods. 3. Several definitions and annotations throughout the text lack proper illustration or clarification, which can hinder the reader's understanding. Without clear explanations, it becomes challenging to grasp the intended meanings and implications of these terms and annotations. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. The authors claim that FFA gains three nice properties. However, the descriptions of those properties are too vague to understand. The authors are highly encouraged to explain more details about those points. Some example concerns are illustrated as follows. The first advantage is that FFA has a strict and formal definition. However, the author fails to explain why the Shapley Value lacks a strict and formal definition, despite it being commonly regarded as the ground truth value in feature attributions. I am unable to discern the distinction between the Shapley Value and FFA in terms of a strict and formal definition, as well as why the Shapley Value cannot provide formal feature attributions. Additionally, claiming that percentage-wise feature attributions are more user-friendly without conducting user experiment studies or related experiments is unsubstantiated. Insufficient case studies on image data are inadequate to provide support. 2. There are a lot of advancements discussing the issues of out-of-distribution sampling. It would be better to see the comparison between these series of methods, making the proposed approach more convincing, as FFA is claimed to address the limitations of out-of-distribution sampling (see line 140 to 148). 3. LIME and SHAP are two classical baselines to provide a model explanation but obtain inferior performance on image datasets. It is highly encouraged the authors compare with some SOTA Shapley-based methods on image data, such as DeepSHAP[1], FastSHAP[2], or CoRTX[3]. 4. As for the experimental results in Figure 3, it is very limited for authors to use only one example to reveal that FFA is better than LIME and SHAP. Furthermore, Table 1 only demonstrates LIME and SHAP fail to get close enough to FFA, which is expected as neither of them is designed to approximate FFA. However, the conclusion in Table 1 does not mean that FFA is a better indicator than LIME, SHAP, or Shapley Values in providing a formal explanation. I did not see any direct evidence to support this point. [1] Delivering Trustworthy AI through Formal XAI. Marques-Silva et al. AAAI 2022. [2] A unified approach to interpreting model predictions. Lundberg et al. NeurIPS 2017. [3] FastSHAP: Real-Time Shapley Value Estimation. Neil et al. ICLR 2022. [4] CoRTX: Contrastive Framework for Real-time Explanation. Chuang et al. ICLR 2023. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: 1. The proposed FFA only focuses on the classification task, which makes it limited to be applied in several other common tasks, such as the regression task. 2. The paper lacks clear illustrations of experiment settings and does not compare with relevant advancements, leading to unconvincing results for verifying the proposed hypothesis. More detailed descriptions of the experiment settings and comparative analyses with related advancements are necessary to strengthen the study's validity. 3. Several annotations and definitions are not clearly illustrated. For example, the “right arrow” in Equation 1 and the definition of “formal explanation” is not well-illustrated in this work. This makes the work hard to follow. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the comments. We try to address them below, which will hopefully convince you that our work has merits justifying acceptance. ### Answers to Questions **Q1.** We would like to clarify that by no means we claim that Shapley values have no formal definition. On the contrary, as Shapley values themselves originate from the work on cooperative games, they are formally well-defined in the original context. Having said that, we build on the evidence provided in recent work [21] that *(even exact)* Shapley values may assign non-zero feature attribution to features unrelated to the prediction and, the other way around, may assign zero attributions to features that play a role in the prediction process. We are not saying that we should stop using approximate Shapley values but the issues revealed in [21] are quite concerning and they motivate plausible alternatives to Shapley values, which FFA is an example of. **Q2.** Note that FFA is not claimed to target specifically the problem of out-of-distribution sampling. We mention OOD sampling as an *example issue* that most of the perturbation-based approaches are susceptible to and that *none* of the formal approaches exhibits. **Q3.** LIME and SHAP are examples of extremely successful explainers widely used in practice and *extensively cited* in the XAI literature. To our best knowledge, DeepSHAP does not support tree ensemble models and so we cannot compare against it. As for FastSHAP [a], we ran additional experiments with it, KernelSHAP [b] and its acceleration (denoted as KernalSHAP-S in the attached PDF) that uses paired sampling [c], and the results can be found in the attached PDF. Observe that these explainers perform similarly to LIME and SHAP, which is not surprising as they approximate exact Shapley values, which are known to be not that closely related to feature importance [21]. We were unable to complete experiments using CoRTX in time, but we do not expect the results to be different since it again computes approximate Shapley values, not explanation attributions. Also, note that CoRTX was published in May 2023, i.e. around the time of NeurIPS submission. **Q4.** Please see our general rebuttal above on as to why we believe FFA is a good metric for feature attribution while prior work [21] revealed the issues pertaining to Shapley values in the explainability context (also see the answer to Q1). ### On Inconvincing Experiments Note that we use multiple datasets in our experiments demonstrating how much LIME and SHAP disagree with FFA. Figure 3 only serves as a single example taken for the Compas dataset, which is widely used in the XAI literature, while Table 1 provides average information across hundreds of instances explained for a number of datasets (including Compas). We can easily show many more examples but the page limit prevents us from doing so. ### On Classification Focus We would like to note that all the modern formal XAI (FXAI) approaches, without exception, aim at tackling classification explainability. Hence, this should not be treated as a limitation of our work - this is simply beyond the scope of FXAI in general (and our paper too) and it requires a large body of currently missing work in the area. Nevertheless, if we want to explain why a regression value sits within given lower and upper bounds then we can apply the same technology, with no changes, assuming we can logically represent the regressor, since the question has been converted to a true/false question. ### On Missing Details and Unverified Hypothesis We would like to kindly note that the experimental setup is described in the paper and the results are discussed within the limits of the page limit. Moreover, the entire experiment can be reproduced using the source code, the benchmarks and all the scripts provided in the supplementary material, which was submitted with the paper. ### On Presentation Issues We are happy to improve the presentation further if it helps a reader understand the ideas. The "right arrow" is the implication operator representing logical entailment widely used in mathematics, computer science, and other sciences. Namely, whatever is written to the left of the arrow logically entails whatever is written to the right of the arrow. Note that formula (1) is augmented with a short text description that summarizes the meaning of the formula. - [a] FastSHAP: Real-Time Shapley Value Estimation. Neil et al. ICLR 2022. - [b] A unified approach to interpreting model predictions. Lundberg et al. NeurIPS 2017. - [c] Improving KernelSHAP: Practical Shapley value estimation using linear regression. Covert, I. and Lee, S.-I. PMLR 2021. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer aL7B Comment: Thank you for the hard work and effort you put in a very short time. It is very impressive. I do appreciate the clarification and experimental results provided by the authors. I now better understand the contributions of this work, and most of my concerns are addressed. The reason that I think FFA is trying to solve the OOD problem comes from Line 145, where the authors mention the goal of the paper is to solve the limitations mentioned in the previous paragraphs. And the previous paragraphs discuss the limitation of the OOD problem. This is confusing for readers to realize the contribution and motivation of FFA. Furthermore, FFA only applies to the classification task, which may limit its usage in more real-world applications. I think this is still a limitation in this paper, but a worthwhile future direction to solve. As a pioneering paper to discuss FFA, I will not consider this point as a drawback during the reviewing session. I have three more follow-up questions after rereading the paper again. I am willing to recommend acceptance if the follow-up questions are solved. **(1)** FFA is only tested on some naive and small datasets in classification tasks, such as the 28x28 size of MNIST and 28x28 PneumoniaMNIST. The size of the image data here is far away from the size in the real-world dataset. I would like to know why the authors chose not to evaluate under other image datasets, such as CIFAR-10 and ImageNet, with larger images. Maybe the high computational complexity of FFA prevents you from doing so? or are there any other reasons? **(2)** The experimental results presented in this study are assessed using the exact FFA value (please correct me if my understanding is inaccurate). Upon reviewing the outcomes of KernelSHAP, FastSHAP, and KernelSHAP-S (both Shapley-based methods), I believe that KernelSHAP and FastSHAP are expected to demonstrate inferior performance, as their primary focus is on estimating Shapley values rather than specifically targeting FFA estimation. On the other hand, ${FFA}_{numbers}$ are designed to estimate FFA, which is expectedly to have better performance with lower estimation error. I agree with Reviewer FoWS, and it would be better if this work could include the axiomatic analysis of FFA, as the experimental results are hard to convince. I would like to ask the authors to provide more insights or observations regarding this question. **(3)** What is the difference between SHAP and KernelSHAP? To the best of my knowledge, these two are the same baseline, so why do they obtain different performances? --- Reply to Comment 1.1.1: Title: On Three Additional Questions from Reviewer aL7B Comment: We are happy the clarification helped! We will rephrase the sentence on OOD (line 145) in the final version of the paper so that it does not confuse our readers. Thank you! As for your 3 additional questions, let us reply below: #### **1. On Using MNIST** It is true that we tested FFA on relatively small datasets but, to the best of our knowledge, those are quite widely used in the XAI literature (this applies to both tabular and image data we used). Thank you for pointing out these two image datasets. To be honest with you, we overlooked the possibility to test FFA approximation on CIFAR-10 while we could do it - nothing prevents us from testing it except for time. While we do not anticipate the same level of efficiency as with MNIST due to the (slightly) larger size of CIFAR-10, we acknowledge the potential significance of including them. We will consider larger image datasets, such as CIFAR-10, in future work. Having said that, ImageNet should definitely be out of reach for our method. This is because FFA approximation requires enumerating abductive explanations, which in turn requires efficient formal reasoning about a model's behavior. Given the size of ImageNet, the models to use here may be too large to handle by formal reasoning. It would be interesting to see what models could be effectively trained on ImageNet and what accuracy they might have, to check if any of them were within the reach of modern formal methods. #### **2. On SHAP's Behavior vs. FFA Approximation** Your understanding is correct that a large portion of the original experimental results and all the additional results are obtained for the cases, where we can efficiently compute *exact FFA*. Although we surely agree with you and the other reviewers that SHAP-like explainers *are not designed* to approximate FFA, it seems valid to compare the values they report with those of FFA. Our findings show that feature attributions calculated by LIME and multiple versions of SHAP are far away from FFA. While we well understand the meaning of FFA, a very simple metric, we believe these observations should motivate the community to think over what these explainers actually compute in practice and how to treat them from different perspectives, e.g. from the view of formal reasoning. As for an axiomatic analysis, we agree this would be interesting to investigate. Please also see our response to Reviewer Cksz, where some initial observations on this are provided. Having said that, we believe our work can be seen as a starting point in this direction, motivating further effort in the near future. #### **3. On Difference Between SHAP, KernelSHAP, and KernelSHAP-S** - "KernelSHAP" used in the additional experiments provided upon your request denotes one of the original versions of SHAP presented in [40]. - The version of "SHAP" used in our paper is an *improved* version published by the same authors in [d], which refers to it as TreeSHAP. We used it because it is designed to better handle tree-based models including random forests and tree ensembles like XGBoost and LightGBM, and other. To our best knowledge, compared to KernelSHAP, it directly uses the structure of the tree models when sampling and, therefore, it is claimed to be advantageous to KernelSHAP (recall that KernelSHAP is model-agnostic) in two aspects: - In TreeSHAP, instead of iterating through all possible feature combinations (or a subset thereof), each combination is processed concurrently within the tree. It uses a more complex algorithm to monitor the results of each combination and the overall complexity is reduced. Therefore, TreeSHAP eliminates all the sampling-based estimation variance and it is not required to use a background dataset or select a subset of feature combinations. - The Shapley values computed by TreeSHAP are not skewed due to feature dependencies, as these dependencies are contained within the tree structure. - "KernelSHAP-S" used in the additional experiments is an accelerated version of "KernelSHAP", which applies paired sampling [c] (the reference can be found in our earlier reply). [d] From local explanations to global understanding with explainable AI for trees. Lundberg et al. Nature machine intelligence, 2020. As a result, the approximations of Shapley values reported by these three versions of SHAP and different and their performance also differs.
Rebuttal 1: Rebuttal: We thank the reviewers for the thorough and helpful comments. ### Why FFA? Several reviewers raise concerns regarding the use of FFA as a "gold standard" in feature attribution and also regarding the validity of its comparison with other feature attribution measures. Hence, we would like to give a few general comments on this. The key insight that our definition of FFA builds on is that formal abductive explanations are (*provably* guaranteed) reasons for the predictions made by a given model. Essentially, if a subset of features is claimed to be an AXp for a prediction made by our model then we can be certain that assigning these features to the values dictated by the instance will *necessarily* lead to the same prediction. Thanks to the subset-minimality of AXp's, we can also be certain that nothing can be removed from an AXp, i.e. none of its proper subsets is an AXp. Note that this is not estimated statistically but it is rather *proved formally* based on the logical representation of the model of interest, which makes the concept very strong. Given this, complete enumeration of all possible AXp's for a model's prediction allows us to explore all the subsets of feature-values that logically entail this concrete prediction. We emphasize that upon completion of AXp enumeration, *no other logical reasons exist* for this concrete model's prediction. This enables us to investigate feature relevancy, i.e. a feature is deemed relevant for a given prediction if it belongs to at least one AXp for that prediction; and vice versa, if it does not then it is irrelevant for the prediction. This also enables us to estimate how important a feature is, i.e. how frequently it appears in explanations across *all* the AXp's for this prediction. In this regard, let's consider two types of attribution errors: TYPE1 where an attribution method says the feature is relevant while it is not, and TYPE2 where the method says that the feature is irrelevant while it is relevant. Importantly, our method never makes TYPE1 errors, even when approximating FFA; this is not the case for the competitors. Consider our example in the paper (see Examples 1-3 and Figures 1-2). Given an instance to explain: *{"Education"="Bachelors", "Status"="Separated", "Occupation"="Sales", "Relationship"="Not-in-family", "Sex"="Male", "Hours/w"<=40}*, we enumerate all AXp's - there are two of them: {"Education", "Hours/w"} and {"Education", "Status"}. As long as these features are set to the values of the instance, we are sure the model will predict "< 50k" no matter what other features are assigned to. Based on this, we can say that feature "Education" is the most important as it appears in all AXp's while "Hours/w" and "Status" have importance of 0.5 as each of them appears in a half of the AXp's. All the other features are irrelevant for this prediction. To illustrate this further, consider the following decision set of 12 irreducible rules, which is **equivalent** to the tree ensemble shown in the paper: ``` 01: IF 'Status == Never-Married' AND '40 < Hours/w <= 45' THEN '< $50k' 02: IF 'Status == Never-Married' AND 'Relationship != Not-in-family' THEN '< $50k' 03: IF 'Education != Doctorate' AND 'Status != Married' THEN '< $50k' 04: IF 'Education != Doctorate' AND 'Hours/w <= 40' THEN '< $50k' 05: IF 'Education != Doctorate' AND 'Relationship == Own-child' THEN '< $50k' 06: IF 'Education == Dropout' THEN '< $50k' 07: IF 'Status != Married' AND 'Relationship != Not-in-family' AND 'Hours/w <= 45' THEN '< $50k' 08: IF 'Education == Doctorate' AND 'Relationship == Not-in-family' AND 'NOT 40 < Hours/w <= 45' THEN '>= $50k' 09: IF 'Education == Doctorate' AND 'Status != Never-Married' AND 'Relationship == Not-in-family' THEN '>= $50k' 10: IF 'Education != Dropout' AND 'Status == Married' AND 'Relationship != Own-child' AND 'Hours/w > 40' THEN '>= $50k' 11: IF 'Education == Doctorate' AND 'Status == Married' THEN '>= $50k' 12: IF 'Education == Doctorate' AND 'Status != Never-Married' AND 'Hours/w > 45' THEN '>= $50k' ``` We emphasize that this DS replicates the behavior of the original BT model in the *entire* feature space. The only two rules applicable to the considered instance are `03` and `04`. In fact, they determine the two AXp's shown above, which we believe provides very clear grounds for defining FFA. Observe that comparing FFA with the explanations of LIME and SHAP is adequate because they also claim to measure feature importance for a given model's prediction, based on statistical observations during extensive sampling in the instance's vicinity. Clearly, statistical correlation does not represent a *causal premise* for the prediction and in practice it may often lead to attributions having little to do with reality. This is confirmed by our example (and experimental results *in general*): observe how both LIME and SHAP claim non-zero importance of feature "Relationship" even though it does not appear in any AXp, a TYPE1 error. Importantly, this feature has nothing to do with the prediction for our concrete instance, according to the equivalent (and simple to understand) DS model, which is at least puzzling. Similarly, both LIME and SHAP fail to appreciate the importance of feature "Education" (TYPE2 error). ### Additional Experimental Results As Reviewer aL7B requested, we ran additional experiments with FastSHAP, KernelSHAP and KernelSHAP-S on the benchmarks used in the paper. The use of SHAP variants results in feature attributions roughly similar to those of LIME and SHAP. It is not surprising given that the nature of these explainers is similar to that of SHAP, i.e. they try to approximate Shapley values, which are not that related to feature importance [21]. Apart from FastSHAP, each of these variants makes TYPE1 error on the the above example, ascribing non-zero importance to "Relationship". FastSHAP makes the same TYPE2 error as SHAP. Pdf: /pdf/a0ed5842b4dd96acea108609bf365621b01e3b25.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Volume Feature Rendering for Fast Neural Radiance Field Reconstruction
Accept (poster)
Summary: This work improves grid-based NeRF on both quality and training speed. To predict view-dependent color, they proposes to condition MLP on the volume rendered voxel features instead of each original point features. As a result, the MLP only run once for each pixel instead of the original dozen of MLP evaluations. To improve robustness, they find using the old pipeline (called pilot network) to warmup the scene for few hundreds iteration is enough. A new spherical-harmonic-based view-direction feature encoding is proposed, which combined with larger MLP achieve state-of-the-art quality. Experiments results show the superior quality and training speed from previous arts. Strengths: The improvement is really solid and promising. The proposed method is not hard to adapt to other method. The paper is easy to follow as well. I believe this work can benefit many future researches. Weaknesses: It's a pity that videos are not provided to showcase the improvement on view-dependent effect. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. **Eq.4.** Is the final view-direction feature summed toghether ($\sum_{l,m} e_l^m$) like what we conventionally do for rgb or concatenated ($\mathrm{concant}(\{e_l^m | \forall l,m\})$)? 2. **Table 3.** It's better to put a table footnote that some baselines' training times are rescaled by 8 (the number of GPUs they used). As discussed in L216-219, this is not rigorous so I think it's better to inform readers who only skims the table. 3. **Table 4's H row.** It seems that the 56 should have be 256. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: I think the limitation is well discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your encouraging feedback. We respond to your concerns one-by-one as follows. **Weaknesses:** __Videos are not provided:__ Thanks for your suggestion. We will upload the rendered videos to showcase the improvement on view-dependent effect in our final version of this paper. **Questions:** 1. __Eq. 4:__ The encoded view direction feature vectors are concatenated to form one comprehensive view direction encoding. We will provide more detailed description for SH feature encoding to highlight this feature vector concatenation in our final version. 2. __Table 3:__ Thanks for your valuable suggestion. We provide the running time comparison of various methods measured on the same RTX 3090 for fair and rigorous comparison. As shown in the updated Tables 1 and 2, we can draw the same conclusion as in our first submission. 3. __Table 4’s H row:__ Thanks for pointing this out. We will correct this error in our final version. Table 2. PSNR and training time on the NeRF synthetic dataset on one RTX 3090. | | PSNR | Time | ------ | ------ | --- | | NeRF | 31.69 | hours | Mip-NeRF | 33.09 | hours | TensoRF | 33.14 | 10 mins | Instant-NGP | 32.35 | 4.2 mins | NerfAcc | 33.11 | 4.2 mins | Our VFR: 6K | 33.02 | 0.97 mins | Our VFR: 20K | __34.62__ | __3.3 mins__ Table 3. PSNR and training time on the real-world 360 dataset on one RTX 3090. | | PSNR | #Feature | Time | ------ | ------ | --- | --- | | NeRF | 24.85 | N/A | days | Mip-NeRF | 25.12 | N/A |days | NeRF++ | 26.21 | N/A |days | Mip-NeRF360 | 29.11 | N/A |days | Instant-NGP | 27.06 | 84M |0.81 hrs | Zip-NeRF | 29.82 | 84M | 5.2 hrs |NerfAcc | 28.69 | 34M | 11 mins |Our VFR: 20K| 29.48 | __34M__ | __5.7 mins__ |Our VFR: 40K| __29.92__ | __34M__ | __11 mins__ --- Rebuttal Comment 1.1: Comment: I appreciate the author's responses. I don't have any other questions.
Summary: The paper proposes a method for fast NeRF reconstruction. The main contribution of the paper is that instead of integrating radiance along the camera ray which requires evaluation of the color MLP at each point, the paper proposes to integrate features along the ray and only evaluate a single MLP on the integrated features to get the final pixel color. In this way, the proposed method can reduce the number of MLP evaluations, thus allowing the method to use achieve faster reconstruction even with a larger MLP. The larger MLP also allows the method to achieve higher-quality reconstructions. The paper does experiments on NeRF synthetic dataset and real-world 360 dataset and demonstrates higher accuracy and fast reconstruction speed than baseline methods such as instant-nap, nerfacc, and tensorf. Strengths: 1. The paper shows that integrating the features along the ray and decoding them to the pixel color with a large MLP can achieve faster and higher-quality reconstructions. 2. The paper proposes to use a small pilot network at the beginning of the optimization that applies standard volume rendering, which helps stabilize the training of the proposed method. 3. The paper did a thorough evaluation of the proposed method against different baseline methods on both synthetic and real-world datasets. Weaknesses: See Questions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. It's not clear to me how the method integrates the features along the ray. In Equation 3, the paper says it performs a weighted combination of the features. I would assume that the weight is calculated in a similar way to the standard volume rendering based on the density field. Then it's not clear to me how the density field is predicted. The paper is based on the NGP representation, and it in fact uses a MLP to predict the density of each point. Is the paper doing in the same way? If that's the case, I think the paper should make it clear. Also in this case, the paper should not claim that it is performing a single evaluation NN for each pixel (or make it more clear, it's a single evaluation of color MLP). Similarly, in the case of real-world scenes, the paper uses proposal networks for sampling, which also requires additional MLP evaluations. Overall, I think the paper should make it more adequate when it says it's performing a single neural network evaluation. 2. In Line 70, the paper says SNeRG still requires many times of MLP evaluations to get the diffuse color. This is not true. SNeRG stores the diffuse colors at each voxel, and directly applies alpha compositing to get the diffuse color for the ray. Therefore, no MLP evaluation is needed. 3. The motivation behind the pilot network is kind of ad-hoc to me. First, the paper says VFR works well for most scenes but fails to converge for some scenes. If it's a scene-specific problem, It's good to know the characteristics of the scenes where it will fail. In addition, the paper says the reason for VFR not converging is that it does not have a coarse geometry to focus on samples on the surface. The first question is that why VFR needs to focus on samples on the surface. Does it mean that VFR will not work well on objects that don't have an opaque surface (like furry objects)? Why standard volume rendering does not this problem? Moreover, the paper also says larger networks tend to have more convergence issues than smaller networks, how does this relate to the lack of coarse geometry? I think the paper should have a better explanation on the convergence issues with VFR and make a more convincing argument behind the motivation of the pilot network. 4. In Figure 4, the results on the Ficus scene of Nerfacc is suspicious to me. What's the reason for the weird specularity on the pot? 5. I do have concerns over the technical contributions of the paper. As the paper said in Line 68, SNeRG has been using feature integrations for faster NeRF rendering. In addition, previous works such as StyleNeRF, EG3D, and StyleSDF have also been using feature integration for NeRF. While they don't do a systematic evaluation on the per-scene reconstruction task, I think generally feature integration has been a standard technique in the NeRF community, and has been explored in similar tasks as mentioned above. To summarize, while I appreciate the thorough experiments and evaluations of the paper, my major concern over the paper is that the proposed method has been explored in similar tasks in previous works and there is a lack of technical contributions from the paper. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 1 poor Limitations: The limitations look good to me. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your insightful comments for our manuscript. We respond to your concerns one-by-one as follows. 1. __How the method integrates the features along the ray:__ We apologize for the missed details of how to obtain the integration weights. The weights are calculated from densities of sampled points, where density of a point is predicted by a linear layer from the queried feature vector before integration. As this linear layer only has one output neuron to predict the density value, it has a negligible effect on the overall training time. On the real-world 360 scenes, the proposal networks do require additional MLP evaluations for density prediction as mentioned by the reviewer. However, these MLPs for the density field are usually very small and MLPs are even not necessary like that in TensoRF. Thus, MLP evaluations for the density field have a very small effect on the overall training time. The reduction in the number of MLP evaluations for color prediction is more valuable as the color MLP evaluation is typically one of the bottlenecks for the overall training time. Nevertheless, we appreciate the reviewer’s suggestion in making the argument clear and will adopt the suggestion by revising the argument on a single evaluation of color MLP in our final version. 2. __The SNeRG still requires many times of MLP evaluations:__ This statement is true because the SNeRG needs to obtain the diffuse color through many MLP evaluations first and then storing the obtained diffuse color for fast rendering. In inverse rendering, the diffuse color of each voxel is unknown and needs to be estimated in NeRF. Thus, the training of the SNeRG is identical to the original NeRF using the standard volume rendering. We agree with the reviewer that no MLP evaluation is needed for SNeRG in the rendering stage. But the SNeRG needs to train a standard NeRF first and then bakes the trained NeRF to a sparse grid. This training and baking in the SNeRG need many MLP evaluations. In our work, we show that the many times of MLP evaluations in the training stage can be avoided by our VFR, leading to a significant reduction in training time for our method (several minutes) and a better rendering quality than the SNeRG (days to train). 3. __The motivation behind the pilot network:__ * We found that our VFR without the pilot network is more likely to fail when the scene is of complex geometry. * Our VFR needs to focus on samples on the surface because the feature vectors of these samples will be integrated into one feature vector to predict the color of that surface point. If the feature vectors from the foreground and background are fused together (could happen when there is no coarse geometry in the early training stage), the following MLP may have difficulties in predicting the correct color. For objects that do not have an opaque surface, our VFR may not work well, and this potential limitation needs to be further investigated in our future work. The standard volume rendering does not have the convergence issue because the color prediction of each sample by the MLP is independent. * We believe larger networks tend to have more convergence issues in our model because larger networks have stronger learning ability, and they try to directly predict the correct color from the integrated feature vector of all samples on a ray and completely ignore the scene geometry. As the training of NeRF models is to find the correct geometry and the color field simultaneously, the above ignorance of geometry leads to failed convergence. * We will add the above explanations to our final version. 4. __Weird specularity on the pot in Fig 4:__ We believe that specularity on the pot is caused by insufficient modeling of the view-dependent effect using a comparably small MLP in the NerfAcc (based on Instand-NGP). 5. __Technical contributions of the paper:__ As aforementioned in our response to your question on SNeRG, the SNeRG aims at faster NeRF rendering without considering the training time. The SNeRG still needs to evaluate the MLP for many times in the training stage, while our VFR can directly integrate queried feature vectors to enable one large MLP evaluation in the training stage, leading to a significant faster training speed as well as a better rendering quality than the SNeRG. We agree with the reviewer that some works also employ a similar idea from different perspectives. But none of existing works investigate the direct integration of queried feature vectors as ours to accelerate the training of NeRF-like models and to improve the rendering quality for the per-scene reconstruction task. The mentioned works (i.e., StyleNeRF, EG3D, and StyleSDF) are in the field of 3D generation instead of neural inverse rendering from multiple views. Although these works employ feature integration, they all require one MLP (large or small in size) evaluation first for each sampled point and then integrate the yielded feature vectors from the MLP evaluations. We respectfully disagree with the reviewer that feature integration has been a standard technique in the NeRF community. We believe that it is not a common view among the NeRF community that queried feature vectors from feature representations like multi-resolution hash encoding can be directly integrated to enable one single-time MLP evaluation. This point is evidenced by the fact that many of the latest representative NeRF works are still based on the standard volume rendering (such as Instant-NGP, TensoRF, and the concurrent Zip-NeRF), although feature integration offers significant benefits as identified by our work. This paper formally introduces the volume feature rendering for per-scene reconstruction task, identifies the potential problem and proposes the solution, conducts extensive experiments and provides a detailed analysis. As such, we believe the contributions of this work are solid and the proposed methods are valuable to the NeRF community. --- Rebuttal Comment 1.1: Title: Reply to the authors Comment: I thank the authors for the rebuttal. While I agree that feature integration is not fully explored in the multi-view Nerf reconstruction task, still I want to point out that it's a technique that has been widely used in the generative NeRF tasks such as EG3D and StyleNeRF. The real contribution of the paper is to re-study it in the context of NeRF reconstruction. However, in the current format, the paper is claiming it's introducing a new method (which is not) for volume rendering without referring to previous works mentioned above. I believe that the paper should be rewritten to better discuss the relationship with previous works and state why it's non-trivial to adapt such a technique to the NeRF reconstruction task. --- Reply to Comment 1.1.1: Comment: Thanks for your valuable comments. We agree with you that the real contribution of the paper is to re-study feature integration in the context of NeRF reconstruction. However, we believe this contribution is still solid, considering that this work has achieved a significant reconstruction speed improvement compared with the state-of-the-art works on the NeRF reconstruction task. We will take your advice by revising the main contribution of this paper from proposing a new method to re-studying the feature integration in the field of multi-view NeRF reconstruction in the final version. In addition, we will also follow your comment to discuss the relationship with previous works and elaborating on why it is non-trivial to adapt such a technique to the NeRF reconstruction task. Specifically, generative NeRF models such as EG3D, StyleNeRF and StyleSDF also employ feature integration for the 3D scene generation task. However, these works focus on 3D consistency and fidelity of generated content using generalizable neural networks with NeRF’s structure. In comparison, 3D reconstruction from multi-view images using NeRF is a very different task from 3D generation. As per-scene optimization is required for NeRF reconstruction, the reconstruction speed is a critical issue in this field, but this problem has not been investigated in the existing generative NeRF works. This paper studies feature integration in volume rendering for fast reconstruction and demonstrate its effectiveness in the NeRF reconstruction task. Furthermore, adapting feature integration to the NeRF reconstruction task is non-trivial because the model may fail to converge due to the lack of coarse geometry at the early training stage. This paper is the first to find this convergence problem as feature integration has not been studied in the NeRF reconstruction task. Based on this finding, we propose a pilot network to solve this problem. In summary, we believe the comprehensive re-study of feature integration in the field NeRF reconstruction conducted in this work makes a solid contribution to the NeRF community.
Summary: The authors propose to perform feature accumulation first at each pixel location, and then pass the accumulated feature to a MLP to get the rendered color in NeRF. They show improved rendering quality over baseline methods on NeRF synthetic dataset and object-centric 360-degree captures. Strengths: 1. Idea seems simple and easy to reproduce. 2. Writing is good and paper is easy to follow overall. 3. Evaluation seems extensive, though some important aspects might be missing (see bullet point 1, 3 in the weakness section). Weaknesses: 1. Will jittering happen when rendering videos using the proposed method? I feel the proposed method might be more likely to suffer from jittering issues than standard volume rendering, as the large MLP network might introduce view-consistencies. It would be great to include some video results in the demo. 2. Training time improvement doesn't seem to be that big, compared with NeRFACC that the proposed method is built upon. Is it due that volume-rendering of a relatively high-dimensional features is slower than volume-rendering of RGB colors, hence cutting down the performance gain from reducing MLP evaluations? 3. Table 2 and Table 3 only contains PSNR comparisons against baselines. I don't think this is indicative enough of image sharpness; I would like to see SSIM and LPIPS comparisons to make a better judgement. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See bullet point 1, 2, 3 in the weakness section. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The limitations seem to be discussed in detail in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments on our manuscript. We address your concerns as follows. 1. __Jittering problem:__ We understand jittering is the problem of 3D inconsistency when rendering video with changing viewpoints. We do not observe the consistency difference between our rendered videos and those obtained by using standard volume rendering methods. As MLPs are conditioning on the queried feature vectors for both ours and standard volume rendering methods, a larger MLP does not introduce extra 3D inconsistency. In addition, considering that pure large MLP-based representations (e.g., the original NeRF and mip-NeRF) are able to render videos with high 3D consistency, we believe large MLPs might not be a source of jittering issue. We will add rendered videos in the final version to demonstrate 3D consistency. 2. __Training time improvement:__ Volume rendering of relatively high-dimensional features (e.g., 64 channels) instead of RGB colors has a negligible effect on the overall training time. Two main computations in the rendering process are feature querying and MLP evaluation, so the overall training time is not only dependent on the number of MLP evaluations. As shown in Table 2, to reach a similar quality, our method is 4+ times faster than the NerfAcc based on the standard volume rendering on the NeRF synthetic dataset. On the real-world 360 dataset, our method achieves a 0.79 dB improvement in PSNR while only taking 52% of its training time, compared with NerfAcc. As NerfAcc based on Instant-NGP is the most state-of-the-art method in fast training, we believe our training time improvement is significant and our method is valuable to the community. 3. __SSIM and LPIPS comparisons:__ Thanks for your suggestion. We add SSIM and LPIPS results in Tables 2 and 3 as follows. On the NeRF synthetic dataset in Table 2, our method outperforms the compared methods in all metrics including PSNR, SSIM and LPIPS while using the minimal training time. In Table 3, it is observed that our method achieves similar rendering quality (slightly better PSNR but slightly worse SSIM and LPIPS) as the concurrent Zip-NeRF on the real-world 360 dataset. However, our method requires significantly less parameters and less training time (11 minutes vs 5.2 hours) compared with Zip-NeRF. In addition, our method only uses 52% of NerfAcc’s training time to reach better PSNR and similar SSIM and LPIPS compared with NerfAcc. Table 2. Rendering quality and training time on the NeRF synthetic dataset on one RTX 3090. | | PSNR | SSIM | LPIPS | Time | --- | --- | --- | --- | --- | | NeRF | 31.69 | 0.953 | 0.050 | hours | Mip-NeRF | 33.09 | 0.961 | 0.043 | hours | TensoRF | 33.14 | 0.963 | 0.047 | 10 mins | Instant-NGP | 32.35 | 0.960 | 0.042 | 4.2 mins | NerfAcc | 33.11 | 0.961 | 0.053 | 4.2 mins | Our VFR: 6K | 33.02 | 0.960 | 0.055 | 0.97 mins | Our VFR: 20K | __34.62__ | __0.971__ | __0.038__ | __3.3 mins__ Table 3. Rendering quality and training time on the real-world 360 dataset on one RTX 3090. | | PSNR | SSIM | LPIPS | #Feature | Time | --- | --- | --- | --- | --- | --- | | NeRF | 24.85 | 0.659 | 0.426 | N/A | days | Mip-NeRF | 25.12 | 0.672 | 0.414 | N/A |days | NeRF++ | 26.21 | 0.729 | 0.348 | N/A |days | Mip-NeRF 360 | 29.11 | 0.846 | 0.203 | N/A |days | Instant-NGP | 27.06 | 0.796 | 0.265 | 84M |0.81 hrs | Zip-NeRF | 29.82 | __0.874__ | __0.170__ | 84M | 5.2 hrs |NerfAcc | 28.69 | 0.834 | 0.221 | 34M |11 mins |Our VFR: 20K| 29.48 | 0.830 | 0.233 | __34M__ | __5.7 mins__ |Our VFR: 40K| __29.92__ | 0.850 | 0.208 | __34M__ | __11 mins__ --- Rebuttal Comment 1.1: Comment: Thanks for the response. I have no other questions.
Summary: In the paper, a novel method for view synthesis is proposed. More specifically, for given posed RGB input images, multi-resolution hash grid features are trained which are rendered to the image plane via volume rendering. The rendered feature is that passed through an MLP, processed with Spherical Harmonics (SH) feature encoding schemes, and then the final color is predicted. The main difference of the proposed system to previous NeRF models is that in 3D space, only feature grids without any MLP layers are optimized, and the view-dependent color prediction is only performed on the rendered features, i.e. "deferred shading" is applied. Strengths: - Results: I believe the main strengths of the paper is that the proposed system leads to good results, i.e. the proposed system outperforms SOTA methods such as Zip-NeRF or Instant-NGP on the two dataset types in both, the reported training times as well as the reported view synthesis quality. - Ablation Study: A rather extensive ablation study is performed which I believe makes the manuscript stronger. It highlights the importance of the various components, and helps the reader to understand where the speed / quality improvements are coming from. - Limitation Section: The manuscript contains a rather extensive limitation section which is beneficial as it portrays a more complete impression of the method. - Manuscript organization: The manuscript is organized coherently and the information flow is rather easy to understand. Weaknesses: - Time comparison (L. 216, L. 229): The time comparison is, if I understood this correctly, not reported with running the methods on the same hardware, e.g. different GPUs (RTX 3090 vs V100) are used, and also times are simply adopted from other papers. As the time complexity is the main selling point of this work, I believe this should be reported more accurately by running the methods on the exact same hardware / setup. - Qualitative Results - Animations: While the reported method achieves high-quality view synthesis performance measured by per-image metrics, the "smoothness" / "3d consistency" between frames is not shown. As the proposed system uses features instead of directly color predictions, the consistency between frames might be lower than for previous methods while not having a significant influence on the per-image metrics. It would there be very helpful to also show animations, e.g. in the form of fly-through videos, to compare methods with this focus. - Unclear how the “pilot network” is used: Is simply the MLP size changed, or is this operating as the “default NeRF” models, i.e. per-point MLP calls are performed, and then color/density is aggregated via volume rendering? It would be helpful if the description of the pilot network would be made more precise. - Figure 3: Unclear. What exactly is happening in the individual blocks? Is a per SH feature predicted, and then they are summed together? What is the “bottleneck arrow” indicating? Why is this skip connection actually necessary? The exact math formulations is also not contained in the respective Section 3.4. It would be helpful if the description of the deferred rendering part is more precise and clear formulas are used so that the reader can better understand the exact technique. - Density prediction: How is the density obtained? Is this stored, similarly to the features, in a 3D hash grid? - Qualitative Comparison - Figure 5: I think in Figure 5, also results from Zip-NeRF should be shown. Typos / Unclear passages: - L. 5: either in occupancy grids -> either in the form of occupancy grids - L. 8: after neural network evaluation -> after neural network evaluations - L. 19: View synthesis is a task that synthesizes -> For the task of view synthesis, unobserved views are synthesized … - L. 36: near surface -> near the surface - L. 39: standard volume rendering technique -> standard volume rendering techniques - L. 58: NN runnings -> NN calls - L. 61: to enable one sample on the surface -> to enable one sample-based rendering - L. 69: modeling the specular effect -> modeling specular effects - L. 96: represent F by use of MLPs. -> represent F by using MLPs - L. 97: as N times MLP evaluations are required -> as N MLP evaluations are required - L. 97/98: Alternatively, recent research shows that -> please add respective citation - L. 100: organized in the forms of -> organized in the form of - L. 110: the many times NN running is … -> the many NN calls are … - L. 111: the importance of a large NN in realistic rendering -> the importance of a large NNs for realistic rendering - L. 112 - 114: “Considering the fact that valuable samples on the surface are close in terms of spatial position, the queried feature vectors are also very similar as they are interpolated according to their position” -> unclear meaning - L. 213: reports on the rendering -> reports the rendering - L. 223: Using learning features organized in hash grids -> Using learned features organized in hash grids - L. 249: Larger MLP (B, C) -> B does not indicate a “larger MLP” if I understood these symbols correctly. I would also suggest to structure the names and ablation study slightly different, e.g. with the typical “Baseline”, “+ GeLU”, “+larger MLP”, etc naming convention; currently, it is very cryptic and hard to understand. Missing Citations: - PlenOctrees For Real-time Rendering of Neural Radiance Fields, Yu et al., ICCV 2021 - Plenoxels Radiance Fields without Neural Networks, Yu et al., CVPR 2022 Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Are the time comparisons performed with respect to different hardware, e.g. RTX 3090 vs V100 GPUs? - Could the authors expand on the "3D consistency" / "smoothness" of in-between frames when rendering a fly-through, compared to non-feature-based methods such as ZipNeRF? - How exactly is the pilot network used? Is this predicting density and color per 3D sample point, or is this a smaller network but also operating on the rendered feature? - Could the authors explain the SH-based deferred rendering technique more precise? - Is the density predicted in 3D and stored in 3D hash grids, similar to the features? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - The authors discuss limitations of their work in the main manuscript. They mainly discuss the limitation of per-scene optimization and comparably slow / not yet real-time rendering times. I believe the limitation discussion is OK like it is; if some interesting failure mode could be illustrated with a figure, this could be interesting to add to the manuscript. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments on our manuscript. We address your concerns as follows. * __Time comparison:__ In light of your suggestion, we report on the training times of various comparison methods measured on the same GPU RTX 3090. As shown in Tables 2 and 3, we can draw the same conclusion as in the original manuscript. Our method achieves a better rendering quality while taking less training time compared with the comparison methods. On the NeRF synthetic dataset, our VFR outperforms the SOTA NerfAcc by 1.51 dB in PSNR but using less training time, or achieves a similar PSNR as NerfAcc but 4+ times faster. On the real-world 360 dataset, our method offers a better rendering quality (29.48 dB) than NerfAcc (28.69 dB) while using 52 % of NerfAcc’s training time. Compared with the concurrent Zip-NeRF that achieves the SOTA PSNR results using 5.2 hours, our method deliveries a slightly better PSNR using only 11 minutes, which is 28 times faster than Zip-NeRF. Table 2. PSNR and training time on the NeRF synthetic dataset on one RTX 3090. | | PSNR | Time | ------ | ------ | --- | | NeRF | 31.69 | hours | Mip-NeRF | 33.09 | hours | TensoRF | 33.14 | 10 mins | Instant-NGP | 32.35 | 4.2 mins | NerfAcc | 33.11 | 4.2 mins | Our VFR: 6K | 33.02 | 0.97 mins | Our VFR: 20K | __34.62__ | __3.3 mins__ Table 3. PSNR and training time on the real-world 360 dataset on one RTX 3090. | | PSNR | #Feature | Time | ------ | ------ | --- | --- | | NeRF | 24.85 | N/A | days | Mip-NeRF | 25.12 | N/A |days | NeRF++ | 26.21 | N/A |days | Mip-NeRF360 | 29.11 | N/A |days | Instant-NGP | 27.06 | 84M |0.81 hrs | Zip-NeRF | 29.82 | 84M | 5.2 hrs |NerfAcc | 28.69 | 34M | 11 mins |Our VFR: 20K| 29.48 | __34M__ | __5.7 mins__ |Our VFR: 40K| __29.92__ | __34M__ | __11 mins__ * __Qualitative Results - Animations:__ We do not observe the consistency difference between our rendered videos and those from the standard volume rendering. Both our VFR and the standard volume rendering method are based on global representations (representing a scene using a MLP and a multiresolution hash grid), which are recognized as the source of consistency between frames. Therefore, the rendered frames using our VFR are as smooth as those from the standard volume rendering. We will upload the rendered videos to demonstrate the frame consistency in the final version. * __Unclear how the “pilot network” is used:__ The pilot network is a small MLP operating as the “default NeRF” models, i.e., per-point MLP calls are performed, and then color/density is aggregated via volume rendering. * __Figure 3: Unclear:__ Thanks for pointing this out. For an integrated feature vector, the spatial MLP yields two feature vectors: SH feature vector and bottleneck feature vector. The SH feature vector will be spitted into small feature vectors for each SH coefficient, i.e., {$\mathbf{f}_0^0 | l=0$}, {$\mathbf{f}_1^{-1}, \mathbf{f}_1^{0}, \mathbf{f}_1^{1} | l=1$}, etc. In the SH feature encoding block, the SH feature vector $\mathbf{f}_m^l$ will be multiplied with SH coefficient (basis function) $Y_l^m(\mathbf{d})$ with view direction $\mathbf{d}$ to form one SH feature encoding vector $\mathbf{e}_m^l$, written as $\mathbf{e}_m^l = \mathbf{f}_m^l Y_l^m(\mathbf{d})$. A comprehensive SH encoding is derived by concatenating all encoding vectors as $\mathbf{E}$ = {$\mathbf{e}_0^0, \mathbf{e}_1^{-1}, \mathbf{e}_1^{0} ...$}, which will be further concatenated with the bottleneck feature vector (view direction independent). This concatenated vector is used to predict the final rendered color by the directional MLP. Based on the above description, we summarize answers to your questions as follows: * For each SH coefficient $Y_l^m(\mathbf{d})$, the spatial MLP will predict a small SH feature vector $\mathbf{f}_m^l$ for it. The SH feature encodings $\mathbf{e}_m^l$ will be concatenated to form a comprehensive encoding. * The bottleneck arrow indicates a bottleneck feature vector predicted by the spatial MLP. * The skip connection is necessary because we want the bottleneck feature vector to be independent to view direction for the prediction of diffuse color. We will add these details in the final version as suggested by the reviewer. * __Density prediction:__ The density is obtained by applying a linear layer to the queried feature vector from the underlying multi-resolution hash encoding. As this linear layer only has one output neuron, it has a negligible effect on the overall training and rendering times. * __Qualitative Comparison - Figure 5:__ The results from Zip-NeRF are similar to ours in visual. We will add the results from Zip-NeRF in Figure 5 as per your suggestion. * __Typos / Unclear passages:__ Thanks for pointing out these typos / unclear passages. We will implement these corrections in our final paper. For line 249, we apologize for the confusion caused by this small mistake. The correct description should be “Larger MLP (C) does increase … ”. We will revise the description of ablation study by following your suggestion in structuring the names and ablation study using the typical “Baseline”, “+ GeLU”, “+larger MLP”. * __Missing Citations:__ We will include these citations in our final version of the paper. * __Limitations--interesting failure mode:__ We have not found other failure modes related to our method in addition to the convergence issue (solved by our proposed pilot network). One potential limitation of our method is that it may not work well for semitransparent objects as our method integrates feature vectors first and then predicts a single final color. This property is not well reflected in the NeRF synthetic and real-world 360 datasets and requires further investigation in our future work. We will add this discussion to our final version. --- Rebuttal Comment 1.1: Comment: I thank the authors very much for this extensive and informative rebuttal. I do not have further questions from my side.
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude to all the reviewers for their precious time and thoughtful review of this manuscript. The comments and suggestions raised are extremely valuable and constructive, and very helpful for improving the quality of the manuscript. Please see the detailed response to each reviewer in the separate response.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Deep learning with kernels through RKHM and the Perron-Frobenius operator
Accept (poster)
Summary: The authors combines RKHM and Perron-Frobenious Operator to deep RKHM, a deep learning framework for kernel methods. By virtue of $C^*$-algebra, they manage to get a better bound on Rademacher generalization error and provide a clear connection with CNNs. Their theoretical analysis provides a new lens for deep kernel theory. Strengths: 1. Clear writing 2. Very solid results 3. Novel tools 4. Their work can be inspiring. 5. Experiments support their theories Weaknesses: As the authors say, more efficient methods specific to deep RKHM remains to be investigated in future work. It remains a problem to scale up to at least ImageNet to be useful. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Any theoretical understanding of why deep RKHMs might be better than nondeep ones? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors very adequately addressed the limitations. Very impressive. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### More efficient methods specific to deep RKHM As we stated in Rem. 5.6, we can apply random Fourier features to reduce the computational cost, but as you point out, methods specific to deep RKHM should be investigated as future work. We will study more about this topic and conduct some experiments with ImageNet to check the computational efficiency. ### Advantage of deep RKHM over shallow RKHM The motivation for studying deep kernel methods is that we try to combine the flexibility of deep neural networks with the representation power and solid theoretical understanding of kernel methods. From the theoretical point of view, the arguments in Subsection 6.2 about benign overfitting are valid only for deep RKHMs (i.e., $L\ge 2$). If $L=1$ (shallow RKHM), then the Gram matrix $G_L=[k(x_i,x_j)]$ is fixed and determined only by the training data and the kernel. On the other hand, if $L\ge 2$ (deep RKHM), then $G_L=[k_L(f_{L-1} \circ \cdots \circ f_1(x_i),f_{L-1} \circ \cdots \circ f_1(x_j))]$ depends also on $f_{1},\ldots,f_{L-1}$. As a result, by adopting the regularization using $G_L$, we can learn proper $f_1,\ldots,f_{L-1}$ so that they make the regularization term small and the whole network overfits benignly. As $L$ becomes large, the function $f_{L-1}\circ\cdots\circ f_1$ changes more flexibly to attain a smaller value of the regularization term. --- Rebuttal Comment 1.1: Comment: makes sense.
Summary: The authors introduce a generalization of RKHS for $C^*$ algebra valued kernels, called RKHM; they build networks by composing sequentially elements taken from a collection of RKHMs, one RKHM per layer. They prove generalization bounds for those networks. These networks output matrices at each layer. Strengths: I believe that a strength of the paper is that the generalization bound obtained in this paper for RKHM is better than the known ones for vvRKHS. It is unclear to me if RKHMs generalize vvRKHSs: maybe considering RKHM in the commutative $C^*$ of diagonal matrices is a way to represent a vvRKHS as an RKHM. If it is the case then the result of this paper is a better generalization bound for vvRKHS. Weaknesses: I feel that the paper is sometimes difficult to read. For example the name '$\mathcal{A}$-valued positive definite kernel' does not refer to the space $\mathcal{X}$ that characterizes the domain of the kernel; when defining deep RKHM knowing this information could help make explicit what the domain of $k_j$ as being $A_{j-1}\times A_{j-1}$. Two remarks in this direction are on some notations: - line 83: shouldn't the content of the brace in the definition of $M_{k,0}$ be $\sum_{i=1}^{n} \phi(x_i) c_i \vert n\in \mathbb{N}, (c_i\in A, i\leq n), (x_i\in \mathcal{X}, i\leq n) $ - equation line 195, maybe the notation: $(f_j\in \mathcal{M}_j)_j$, could recall that the optimization is over all the RKHMs that define the networks. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: In 'Connection with neural tangent kernel', is the aim of this paragraph to define a neural tangent kernel for deep RKHM? In 'Comparison to CNN', how long does the training take and what is the memory consumption of RKHM and CNN? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: In Conclusion and Limitations, I feel that the statement 'connections with existing studies such as CNNs and neural tangent kernel' is a bit of a strong statement as the authors explain in section 6.1 that CNN and RKHM do not really relate to one another and it is unclear to me how deep the connection to neural tangent kernel is. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Relation between RKHM and vvRKHS RKHM is a generalization of vvRKHS in the sense that we can reconstruct vvRKHS using RKHM (Please refer Thm. 4.13 of [2]). Since the output space of the functions in a vvRKHS is not necessarily a $C^*$-algebra, the connection between RKHM and vvRKHS is a little complicated. An important advantage of RKHM over vvRKHS is that we can incorporate the flexibility of $C^*$-algebra into kernel methods. Indeed, the idea of using the operator norm to improve the bound proposed in this paper comes from the perspective of RKHM, and cannot be obtained from the perspective of vvRKHS. As for the generalization bound, if the kernel has the form $k=\tilde{k}I$ for a complex-valued kernel $\tilde{k}$ and the $D\times D$ identity matrix $I$, then our result reduces the dependency on the output dimension of the vvRKHS. This is achieved by putting $D$ elements in a $D$-dimensional vector $v$ in a $\lceil \sqrt{D}\rceil\times \lceil \sqrt{D}\rceil$ matrix. Then, instead of considering $D\times D$ matrix-valued kernel for the vvRKHS, we only need $\lceil \sqrt{D}\rceil\times \lceil \sqrt{D}\rceil$ matrix-valued kernel for the RKHM. As a result, the dependency reduces from $D$ to $\lceil \sqrt{D}\rceil$. Since the action of the above kernel to a vector is described by the elementwise multiplication of the complex-valued kernel $\tilde{k}$ to the vector, the construction of the matrix-valued kernel is trivial. We can generalize the above argument to the case of $k=\tilde{k}a$ for general $a\in\mathbb{C}^{D\times D}$ by considering $av$ instead of $v$ to construct the $\lceil \sqrt{D}\rceil\times \lceil \sqrt{D}\rceil$ matrix. Since separable kernels are widely used, our result is valid in many cases. In more general cases, the relationship between RKHMs and vvRKHSs is more complicated, and investigating how we can apply this type of argument to vvRKHS with more general kernels is future work. ### Readability We will improve readability based on your comments. ### Connection with neural tangent kernel The aim of this paragraph is to define a neural tangent kernel for $C^*$-algebra network (a generalization of neural networks by means of $C^*$-algebra, please see [3]) and developing a theory for combining neural networks and deep RKHMs. Existing studies investigate the connection between neural networks and kernel methods through neural tangent kernels. We show the analogy of the existing connection for $C^*$-algebra networks and RKHMs. As we stated in Rem. 6.2, by virtue of the arguments in this paragraph, we can regard deep neural networks as one-layer (shallow) RKHM, which enables us to analyze the combination of neural networks and deep RKHMs. ### Comparison to CNN **Memory consumption** In the experiment of the comparison to CNNs in the paper, we used a CNN with $(7\times 7)$ and $(4\times 4)$-filters. On the other hand, for the deep RKHM we used in this experiment, we learned the coefficients $c_{i,j}$ in Prop. 5.1 for $j=1,2$ and $i=1,\ldots,n$. That is, we learned the following coefficient: - $n(=20)$ block diagonal matrices each of whom has four $(7\times 7)$-blocks (for the 1st layer) - $n$ block diagonal matrices each of whom has seven $(4\times 4)$-blocks (for the 2nd layer) Thus, the size of the parameters we have to learn for the deep RKHM is larger than the CNN. Since the memory consumption depends on the size of learning parameters, the memory consumption is larger for the deep RKHM than for the CNN. **Computational cost** In each iteration, we compute $f(x_i)$ for $i=1,\ldots,n$ and the derivative of $f$ with respect to the learning parameters. Here, $f=f_1\circ f_2$ is the network. For the deep RKHM, we compute the product of a Gram matrix (composed of $n\times n$ block diagonal matrices) and a vector (composed of $n$ block diagonal matrices) for computing $f_j(x_i)$ for all $i=1,\ldots,n$. Thus, the computational cost for computing $f(x_i)$ for all $i=1,\ldots,n$ is $O(n^2d(m_1+m_2))$, where $m_1=7$ and $m_2=4$ are the sizes of the block diagonal matrices. For the CNN, the computational cost for computing $f(x_i)$ for all $i=1,\ldots,n$ is $O(nd^2(l_1+l_2))$, where $l_1=7\times 7$ and $l_2=4\times 4$ are the number of elements in the filters. Since we set $n=20$ and $d=28$, the computational cost of the deep RKHM for computing $f(x_i)$ for $i=1,\ldots,n$ is smaller than that of the CNN. However, since the size of the learning parameters of the deep RKHM is large, the computational cost for computing the derivative of $f(x_i)$ is larger than that of the CNN. **Additional experiment** To compare the deep RKHM to a CNN with the same size of learning parameters (the same memory consumption), we did an additional experiment. We constructed a CNN with $(28\times 7\cdot 20)$ and $(28\times 4\cdot 20)$-filters (The size of learning parameters is the same as the deep RKHM) and replaced the CNN used in the paper with this CNN. Please see Fig. 2 in the PDF file attached at the top of the review part. The result is similar to that in the paper, and the deep RKHM also outperforms the CNN with the same learning parameter size. Since the size of the learning parameter is the same in this case, the computational cost for one iteration for learning the deep RKHM is the same or smaller than that for learning the CNN. ### Connection with existing studies In our paper, the term "connection" does not mean that the two objects are the same. In Sec. 6.1, we showed that whereas CNNs learn filters, deep RKHMs learn activation functions. As for the neural tangent kernel, as we also stated in the response of the first question, the arguments in Sec. 6.3 enable us to analyze the combination of neural and kernel networks. This makes our analysis more flexible. This paper is the first paper on deep kernel methods with RKHM, and we provide new features for deep kernel methods. We will investigate connections with existing methods deeper in future work.
Summary: This paper proposes deep Reproducing kernel Hilbert $\mathcal{C}^*$-module (RKHM), a deep learning framework for kernel methods, which generalizes RKHS by means of $\mathcal{C}^*$-algebra. In this setting, a map as the composition of functions in RKHMs is constructed. Theoretically, the authors develop a new Rademacher generalization bound of deep RKHM using Perron-Frobenious norm. The dependency of the bound on the output dimension is milder than existing bounds. Moreover, they show a representer theorem to guarantee that the solution of a given practical minimization problem is represented only with given data. In addition, they show connections of deep RKHM with existing studies such as CNNs, benign overfitting and neural tangent kernel. Furthermore, this paper presents a series of numerical experiments to support their theory and show the practical performance of deep RKHM. Strengths: This paper provides a new approach to analyze and understand deep kernel methods. In particular, the authors derives a generalization bound for deep RKHMs, while existing work focuses on shallow RKHMs. This bound also relaxes the dependence on the output dimension by using Perron-Frobenious norm. It is also interesting how this bound relates to benign overfitting. Moreover, this paper provides some experiments to support their theoretical findings. The paper is technically sound and the contents are very relevant to the community of NeurIPS. Weaknesses: The paper is well-organized and well-written. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can you explain how the generalization bounds in Theorem 4.1 and Theorem 4.5 depend on the input dimension? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have clearly addressed the limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Dependency on the input dimension The input dimension depends on the generalization bound through the $\mathcal{A}$-valued kernel $k$. In Thms. 4.1 and 4.5, the bound depends on $\mathrm{tr}(k(x_i,x_i))$. Therefore, the dependency of the input dimension on the bound is determined by the dependency of the input dimension on the $\mathcal{A}$-valued kernel $k$.
Summary: The paper establishes properties on the composition of functions belonging to a RKHM, a generaliztion of RKHSs. The authors compute the Rademacher complexity of this function class and establish a representer theorem. Strengths: - The objects studied in the paper are well introduced. The writing is generally clear. - The paper adds another way formalizing the composition of linear models. Weaknesses: - The derivations of both the results of the paper are standard : the computation of Rademacher complexity using Cauchy-Schwartz + Jensen is essentially the same for all linear models. It is reproduced here with more terminology and higher dimensions but the steps are the same. The representer theorem which relies simply on the fact that the orthogonal component doesn't affect the objective is the same standard proof for kernels. - It is unclear what is gained from this additional abstraction. The paragraph on the connection to CNNs is unconvincing: CNNs learn the filters, whereas (composition) of linear models have fixed filters but learn the coefficients of the terms in the sum. It is difficult to argue that this much abstraction is necessary to derive this observation. *The additional abstraction leads to doubts over existence of the objects*: the existence of a "well defined" Perron-Frobenius operator is only established for a very specific case in Lemma 2.8, which appears to be artificial as it is a regular complex valued kernel multiplied by a matrix to create a multi-output kernel. - The term benign overfitting does not seem appropriate as it is discussed in section 6.2 of the paper : The term benign overfitting refers to the phenomenon observed that *complex function classes* can fit training data noise without loosing performance on the test set. In section 6.2 the discussion is on the *complexity of the function class*. The authors say that *regularization* in eq(2) with the operator norm of the Perron-Frobenius operator composition decreases the function class complexity and therefore leads to better generalization, which is normal and expected. This goes completely against the unusual empirical observations that led to the study of benign overfitting - which is that *complex function classes generalize without regularization*. How can benign overfitting be discussed by analyzing a **uniform** generalization bound ? - The general motivation of this work can be further developed (i.e slightly longer first paragraph) : Why should we formalize compositions of linear models ? The representer theorem is good to have but we have no convexity, so why use a composition of linear models instead of simply learning the feature embeddings as well if convexity is already lost ? Why are "deep" kernels worthy of study ? Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What exactly is being done when writing "Thus" in the proof of Lemma 2.7 ? What does well defined mean? Are you showing existence of such a map ? It is unclear. - How is Corollary 4.7 derived ? Can you give more details. In particular, how do you control the operator norm of the Perron-Frobenius operator without assuming that the intermediate $\phi_i$ are bounded ? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Derivations of the results Our Rademacher complexity analysis and representer theorem for RKHM use Cauchy-Schwarz, Jensen inequalities, and orthogonality arguments, as most of the results in the RKHS case. However, it is important to be precise that there are specificities and technical difficulties peculiar to RKHM that we adequately addressed. This is why we improved the bound. Our main idea is to use the operator norm to reduce the dependency of the generalization bound on the output dimension. In the existing study of RKHM, the trace of operators is considered instead of the operator norm. Availability of the operator norm is by virtue of the application of $C^*$-algebra, and whether we can combine the standard approaches, such as Cauchy-Schwartz and Jensen, with the operator norm or not is nontrivial. For example, in the proofs of Lem. 4.4 and Prop. 4.6, we used a semi-inner product. Also, in the proof of Prop. 4.6, we need another lemma (Lem. A.1) to split the operator norm of the Perron-Frobenius (P-F) operator $P_{f_{L-1}}\cdots P_{f_1}$ and the absolute value of $\sum_{i=1}^n\phi_1(x_i)(\sigma_ip^*)$. Applying P-F operators is also our main idea and not a standard approach. ### Connection with CNN and well-definedness of the Perron-Frobenius (P-F) operators Our idea to connect kernel methods and CNN is based on the concepts of $C^*$-algebra and deep RKHM. Our connection relies on the observation that by considering the $C^*$-algebra $\mathcal{A}$ of circulant matrices, we can represent the convolution as the product of elements in the $C^*$-algebra. Indeed, $\mathcal{A}$-valued positive definite kernel makes the output of each layer become a circulant matrix, which enables us to apply the convolution as the product of the output and a parameter. We mentioned in the paper that for deep RKHM, we learn the coefficients $c_i$, while for CNN we learn the filters $a_j$ (line 244-245). It seems reasonable to interpret this difference as a consequence of solving the problem in the primal or in the dual. In the primal, the number of the learning parameters depends on the dimension of the data (or the filter for convolution), while in the dual, it depends on the size of the data. Moreover, separable kernels ($\mathcal{A}$-valued kernel defined by a complex-valued kernel and an operator) are widely used in existing literature of vvRKHS; please see e.g. [1]. We might well say that it is the most used class of operator-valued kernels in the ML literature. It includes all scalar-valued kernels as a special case. Lem. 2.8 guarantees the validity of the analysis with Perron-Frobenius operators, at least in the separable case. This paper is the first paper that combines deep kernel methods with $C^*$-algebra. Our study provides a condition for "good kernels" by means of the well-definedness of the P-F operator. As we already mentioned in Conclusion of our paper, more detailed analysis for other kernels is future work. ### Benign overfitting We are sorry the term "regularization" may be misleading since the operator norm of the P-F composition depends on the training data. The 2nd term in Eq. (1) and the right hand side of Eq. (2) are expected to cause overfitting since these terms depend on the training data. (The Gram matrix $G_L$ depends on the data.) Standard existing regularizations, such as the one for kernel ridge regression (the 3rd term in Eq. (1)) and $l_p$ regularization of weight matrices for neural networks, do not depend on the data. Please note the 3rd term in Eq. (1) does not depend on the training data before applying the representer theorem. The term in Eq. (2) is different from these standard regularizations. If we try to minimize a value depending on the training data, the model seems to be more specific for the training data, and it may cause overfitting. Thus, the connection between the minimization of the 2nd term in Eq. (1) and generalization cannot be explained by the classical argument about generalization and regularization, and we need the argument of benign overfitting. In the 3rd experiment in Sec. 7, we can also observe benign overfitting. Please see Fig. 1 in the PDF file attached at the top of the review part. If $\lambda_1=0$ in Eq. (1) (do not minimize the term in Eq. (2)), then whereas the training loss becomes small as the learning process proceeds, the test loss becomes large after sufficient iterations. On the other hand, if $\lambda_1=1$ (minimize the term in Eq. (2)), then the training loss becomes smaller than the case of $\lambda_1=0$, and the test loss does not become large even the learning process proceeds. ### Motivation Let us first note the models we are considering are nonlinear. All of the functions $f_1,\ldots,f_L$ introduced in Sec. 3 are nonlinear functions. By virtue of the composition, the model $f=f_L\circ \cdots\circ f_1$ is also nonlinear with respect to the coefficients $c_{i,j}$ introduced in the representer thm (Prop. 5.1). The motivation for studying deep kernel methods is that we try to combine the flexibility of deep neural networks with the representation power and solid theoretical understanding of kernel methods. We cannot guarantee the convexity of the loss functions, but the situation is the same as the case of neural networks. We don't have the convexity of the loss functions for neural networks. Our motivation for studying deep RKHM is that we apply the novel perspective of RKHM to deep kernel methods in order to make the deep kernel methods more powerful. ### Answer of the Questions - Well-definedness of $P_f$ means "if $u=v$ for $u,v\in M_{k,0}$, then $P_fu=P_fv$". We are sorry the last formula in the proof of Lem. 2.7 should be $P_f\sum_{i=1}^n\phi_1(x_i)d_i$ (it is a typo). - Cor. 4.7 is derived by applying the inequalities $\Vert P_{f_j}\Vert\le B_j$ ($j=1,\ldots,L-1$) and $\Vert f_L\Vert\le B_L$ to the result in Prop. 4.6. These inequalities for intermediate layers are by the definition of $\mathcal{F}_j$.
Rebuttal 1: Rebuttal: ## To all the reviewers Thank you very much for your constructive comments. We address your questions and concerns below. We will revise our paper based on your comments and the response below for the camera-ready version. We attached a PDF file to support some of our responses here. Also, we provide references for our responses here. [1] Mauricio A. Alvarez, Lorenzo Rosasco, Neil D. Lawrence, et al. Kernels for vector-valued functions: A review. Foundations and Trends in Machine Learning, 4(3):195–266, 2012. [2] Yuka Hashimoto, Isao Ishikawa, Masahiro Ikeda, Fuyuta Komura, Takeshi Katsura, and Yoshinobu Kawahara. Reproducing kernel Hilbert $C^*$-module and kernel mean embeddings. JMLR, 22(267):1–56, 2021. [3] Ryuichiro Hataya and Yuka Hashimoto. Noncommutative $C^*$-algebra net: Learning neural networks with powerful product structure in $C^*$-algebra. arXiv: 2302.01191. Pdf: /pdf/7662aab840ad41f468d53e8dbed23bd3d610b796.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
ReTR: Modeling Rendering Via Transformer for Generalizable Neural Surface Reconstruction
Accept (poster)
Summary: The paper presents a learning-based framework on the well-studied neural surface reconstruction problem. The key contribution of this paper is to take the complex photon-particle interaction into account and present a more generalized pipeline rather than relying on volume rendering. The proposed framework released the power of the transformer to achieve enhanced feature representation of sampled points along the ray. Experiments on several popular benchmarks have shown the effectiveness of the proposed approach. Strengths: (1) The idea of modeling the complex light transport and releasing the flexibility from regular volume rendering is novel and interesting. (2) Overall, the paper is well presented and easy to follow. (3) The paper has achieved state-of-the-art generalizable neural surface reconstruction performance across different datasets. Weaknesses: (1) (Major) The idea of using 3D feature volumes with hybrid resolution is not new and has been proposed in NeuralRecon (https://arxiv.org/pdf/2104.00681.pdf). Besides, since there are two major differences (the elimination of FPN and the construction of multi-level projected feature maps) between the proposed hybrid extraction and the original one, it is better to make separate ablation studies to further verify the effect of the two variations. (2) (Major) For the ablation study of the occlusion transformer, compared with directly removing this module, a better ablation way is to attend every point’s feature as the input of the key embedding in the self-attention computation. This way ensures a fair setting (roughly same architecture and complexity) and the only difference is whether the later points contribute to the former ones. (3) (Minor) Visual comparison on view synthesis with other methods: since one of the main motivations of this work is to model the photon-particle interaction, I assume a major outcome is more robust rendering against the variations (blur, specular..) from input views. Thus, it is better to show some visual comparison with other baselines (SparseNeus, VolRecon, NeRF…) to verify this point. (4) (Minor)The main diagram (Figure 2) can be displayed in a clearer and more elegant way. Basically the part of transformer details would belong to either occlusion transformer or render transformer, rather than ‘feature fusion’. Besides, there is ambiguity on what the patches with different colors stand for. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Overall I think the idea of the paper is novel and technically sound, presenting another perspective of the mainstream methods based on volume rendering. Despite some technical concerns listed in weaknesses, now I lean toward accepting this paper. I expect authors to address my concerns by providing more comprehensive experiments to further show the effectiveness of this work. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Authors have discussed their limitations on rendering speed. I think another limitation is that the SOTA neural surface reconstruction method is still only comparable with a classic MVS-based baseline (MVSNet) at this time. But MVSNet and its extensions must be much faster to get a depth map than the rendering-based ones. So there is a long way to go for this area to further release the power of implicit representation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s time and insightful evaluations. **Q1. Hybrid resolution is not new, and justification of the proposed hybrid extractor with the original one.** Our main contribution is to rectify the oversimplification in traditional volume rendering by introducing a generalized form that better models incident photons. Yet HybridExtractor (HE) is a technical design that is advantageous in terms of efficiency and performance. We appreciate the reviewer highlighting the multi-level feature maps in NeuralRecon, and we will ensure to cite this paper and integrate discussion about this paper in the 3.4 section. To address the reviewer's concerns and showcase the effectiveness of HE in comparison with the baselines, we conducted additional experiments where HE was incorporated into the VolRecon framework. This was in response to the reviewer's observation regarding the "elimination of FPN and the construction of multi-level projected feature maps" as potential factors enhancing VolRecon's performance. Our experiments specifically evaluated the efficiency of both multi-level and single-level projected feature maps, resulting in a mean Chamfer distance of 1.35, marginally better than VolRecon's 1.38. At the core of HE is the ability to construct multi-level projected feature maps. In an attempt to address concerns about the exclusion of FPN, we developed a HybridExtractor variant that employs FPN for aggregating the features from the last two layers, resulting in a last-level projected feature. This model achieved a mean Chamfer distance of 1.34, indicating that the difference when not using FPN is minimal. However, it's worth noting that introducing FPN not only increases the parameter count but also enlarges the resolution of the last-level projected features, equating it with the second-level features. Our findings suggest that by constructing multi-level projected feature maps, we've essentially mirrored the multi-level feature aggregation function of the FPN, but in a more efficient manner. **Q2. Ablation study of Occlusion Transformer.** Thank you for the valuable suggestion. To clarify, the experiment was indeed conducted in a similar manner as suggested. Our occlusion transformer utilizes a mask, characterized by a diagonal matrix with its upper triangle masked out, ensuring that the subsequent point can only perceive points ahead of it. This design encourages the latter point to contribute to the object's surface formation. In the ablation experiment highlighted in $Tab. 3$ of the main document, we omitted this mask to observe any shifts in accuracy using chamfer distance as a metric. We acknowledge the confusion arising from the descriptions in $Tab. 3$ and lines 267-268 and will address this in our subsequent edition. **Q3. Rendering against variations.** The goal of this work is to accurately reconstruct the surface geometry given sparse input views. However, ReTR still demonstrates strong performance in view synthesis. Here we show rendering results in rebuttal PDF Figure 3; for the “Train” scene, we can see ReTR is robust to the lighting changes: the background (red region in $Fig. 3$) and the pavement (yellow region in $Fig. 3$) in front of the train, where VolRecon struggles to predict on such area. In addition, for “Scan 24” in DTU under different lighting, VolRecon again gives incorrect predictions about the front and back of the roof. Such a result shows ReTR is more robust under different settings. **Q4. To display Fig.2 in a more elegant way.** We will incorporate all the suggestions and revise them in the next version. --- Rebuttal Comment 1.1: Comment: Thanks for the valuable feedback from authors. Basically the feedback addressed most of my concerns. I expect the authors to have a better organized version in their final version on the language and diagram. Now I lean towards keep my original rating and accept this paper. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to provide your insights and for considering our work. We truly appreciate your feedback and will ensure that our final version will have improved language clarity and a better-organized diagram. Best, Authors
Summary: This paper proposes ReTR, a new architecture that leverages transformer to replace the traditional volume rendering process. The insight of the paper is that: the traditional volume rendering equation is oversimplified to model photon-particle interaction. Moreover, the color compositing function highly relies on the projected input view colors, and therefore overlooking intricate physical effects. To solve these two limitations, ReTR replaces the volume rendering equation with a render transformer. The attention map can be extracted from the render transformer to synthesize geometry details. An occlusion transformer is further introduced to obtain finer features. Instead of using FPN features and ResUNet, ReTR also utilizes features from different layers to construct multi-scale features. Experiments are conducted on the DTU dataset, Tanks & Temples, ETH3D, and BlendedMVS. The method is compared with state-of-the-art generalizable NeRF methods and generalizable surface reconstruction methods and achieves the best performance among them. Ablation studies also show the effectiveness of the network architecture. Moreover, ReTR surpasses SparseNeuS and VolRecon even without depth supervision. Strengths: I like the insights proposed in the paper that the traditional volume rendering equation is oversimplified. The solution that utilizes transformers to replace volume rendering is straightforward yet sound and effective. Experiments are exhaustive and validated the network designation. I also like the discussion of interpreting the render transformer the hitting probability. Weaknesses: More related work should be discussed in Section 2 for generalizable NeRF methods (NeuRays, CVPR22; Generalizable Patch-Based Neural Rendering, ECCV 2022, ...) and neural surface reconstruction methods (NeuS, ...). Notations in Equations (8) and (9) are not well explained, for example, what are $\mathbf{R}^f$ and $\mathbf{R}^{\text{occ}}$, and $\mathbf{f}_i^{\text{occ}}$ did not appear before Equation (10). There is also a typo at Line 232: **tanks and temples** instead of **tanks and templates**. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Though the proposed method is effective, mathematically, I did not see how the network architecture can model the geometry better than other neural surface reconstruction methods, such as SparseNeuS and VolRecon -- Especially when the network is trained without depth supervision. I would like to see more insights and explanations from the authors in the feedback. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The main concern of this paper is it requires a long training time, e.g. 3 days on a 3090. It comes with a cost when introducing transformers in the network architecture. Generally, it is not a big problem since the method is generalizable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s time and insightful evaluations. **Q1. More related work should be discussed in Section 2.** Thanks for the feedback. NeuRays(CVPR22) leverages neural networks to model and address occlusions, enhancing the quality and accuracy of image-based rendering. GPBR(ECCV22) employs neural networks to transform and composite patches from source images, enabling versatile and realistic image synthesis across various scenes. NeuS(NeurIPS21) uses SDF value to model density in the volume rendering to learn neural implicit surfaces, offering a robust method for multi-view 3D reconstruction from 2D images. In the next version, we will discuss more relative works and incorporate all the suggested discussions into Section 2. **Q2. What are $R^f$ and $R^{occ}$, and $f^{occ}_{i}$ did not appear before Equation (10).** $R^f$ denotes the collective set of tokens. Specifically, $f^{tok}$ represents the "meta-ray token", employed to capture the global representation (as delineated in line 175). The remaining components, represented by $f_N$, describe the point features distributed along the ray. $R^{occ}$, on the other hand, signifies the occlusion transformer, which employs $R^f$ for cross attention. $f^{occ}_{i}$ denotes the refined features of $x_i$ which obtained from the render transformer. We recognize the importance of clarity and precision and will ensure that these notations are further elucidated and any typographical errors rectified in our forthcoming version. **Q3. How the network architecture can model the geometry better ... Especially when the network is trained without depth supervision.** Traditional volume rendering determines each point's hit probability based on its inherent features, with weights for each point then calculated via a set Cumulative Distribution Function (CDF). In this methodology, the impact of all prior points on the current point's weight is merely reflected in their cumulative probability. In contrast, our ReTR approach introduces a "meta-ray token" that serves as a global token, assimilating all features within a ray through cross-attention. This is similar to the **CLS_TOKEN** in **ViT**, wherein the global token will learn the statistical properties of rendering. The process then becomes about more than just individual point features; it's also influenced by the overarching information within the "meta-ray token," making for a richer information pool. Furthermore, our model trains using a rendering loss and is even effective **without** depth supervision. The softmax operation within our framework encourages the model to learn a specific peak – typically the surface – making it the primary influence on the rendered color. We conducted an experiment substituting Neus's rendering with ours (refer to the rebuttal PDF $Fig. 1$). The result indicated that our rendering approach guides the network towards a superior weight distribution (high kurtosis) along the ray, even when applying to per-scene optimization methods that do not require depth supervision, further showing the effectiveness of our proposed rendering approach. Traditional volume rendering can, on the other hand, produce flawed outcomes by identifying areas of low hit probability. Such outcomes are detrimental to accurate surface reconstruction, as we've illustrated in $Fig. 1$ of our main paper. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thanks to the authors for the rebuttal. All of my concerns are addressed. I decide to improve my rating to this paper.
Summary: The paper proposes a new framework for generalizable neural surface reconstruction, which utilizes the mechanism of transformers to model the rendering process. The authors first derive a general form of generalizable volume rendering based on existing methods and identify its limitations. They then suggest improving upon this framework by introducing a new rendering approach based on learned attention maps and performing over-ray accumulation in feature space rather than color space. Experiments are conducted on four different datasets and compared to recent baselines, demonstrating superior performance in generalizable reconstruction. Strengths: - The paper is well-written and exhibits a smooth flow. The authors effectively convey the motivation behind their methods, providing necessary background information and comparing against recent baselines. Additionally, the authors ensure that the reader can easily follow the logical progression of the paper. - Section 3.1 presents a general framework that serves as a well grounded basis for existing methods. The authors' identification of limitations within this framework offers valuable observations, contributing to the overall storyline and motivation of the paper. - The results presented in both the main paper and supplementary material demonstrate the superiority of the proposed method compared to the VolRecon baseline. - The authors provided an ablation study on the various components of the method, effectively differentiating their individual contributions and providing a solid understanding of their impact. Weaknesses: - The section describing the reconstruction transformer appears to be incomplete and confusing due to several missing details and explanations: - It is unclear how equation 6 (and its improvement in equation 10) fit into the general framework proposed in equation 5. This confusion arises because the final MLP from feature to color does not align with the framework. Additionally, the weight function and color function are not explicitly provided. - The FeatureFusion operation is not defined, leaving ambiguity in understanding its purpose and implementation. - The definition of the "meta-ray token" $f^{tok}$ for a scene is unclear, specifically whether it pertains to per-image (per-ray/pixel) or per-scene information, and how it differs from the image features $f^{img}$. - The meaning of $F_i$ in line 161 is not provided or explained. It is crucial for the authors to address these misunderstandings and revisit the missing definitions in order to clarify the concepts. - While it is acknowledged that the general form presented in section 3.1 oversimplifies the modeling of light transport in 3D scenes, it is hard to perceive how the mechanism of cross-attention over ray samples effectively models complex photon-particle interactions. Real interactions typically occur in spatial domains, whereas the suggested approach focuses on interactions over ray samples. It is suggested that the authors either temper these claims or provide further explanation on how their framework accounts for intricate global physical effects that encompass both global and local physical effects. - The qualitative comparison is somewhat limited as it only includes a comparison to VolRecon. It is essential for the authors to provide visual comparisons with other methods as well, particularly SparseNeuS, to offer a more comprehensive evaluation. - The authors have not presented timing evaluations of their method in both training and evaluation scenarios. Given that the limitation section highlights timing as a significant drawback of generalizable methods, it is important for the authors to address this by providing timing evaluations to enhance the paper's completeness. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - When evaluating the SparseNeuS baseline, did the authors incorporate depth supervision as well? My understanding is that both ReTR and VolRecon utilize depth supervision during training, while SparseNeuS does not. This raises concerns about the fairness of the comparison between these methods. - In Section 3.2, the paper suggests key rendering properties that the system should possess. However, there is no specific mention of the requirement for the weights to sum up to 1, indicating that all rays are eventually occluded. While this property is not explicitly described, it seems to be employed later in the paper using softmax. It would be helpful if the authors provided further clarification on this matter. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed limitations in the supplementary material. However, it is necessary to present the main limitation of efficiency in the main paper, even if briefly in the conclusion section. Additionally, providing quantitative results that demonstrate the tradeoff between the number of parameters and training/rendering time would greatly benefit the presentation of the method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s time and insightful evaluations. **Q1. Reconstruction’s description appears incomplete and confusing.** **1.1. how does equation 6 (and its improvement in equation 10) fit into the general framework proposed in equation 5?** Thanks for the feedback. Since our rendering operates within the feature space, $Eq. 6$ naturally deviates slightly from $Eq. 5$. In the revised version, we elucidate these distinctions to ensure that readers can seamlessly navigate the transformations between the equations. Specifically, features will be aggregated using cross-attention (which can be analogized to the weight function in $Eq. 5$, with the attention map representing the weights). Subsequently, these aggregated features will be employed to predict RGB values (analogous to the color function in $Eq. 5$). Color can be interpreted as a characteristic of each feature point. In $Eq. 5$, we propose that the feature at each point can be aggregated in a manner analogous to RGB, enabling us to deduce the primary feature points. This can be mathematically expressed as: $C(\mathbf{r})=\mathcal{C}\left(\sum_{i=1}^{N} \mathcal{W}\left({F}_1, \dots,{F}_i\right) {F}_i\right)$, Where the $\mathcal{C}\left( \cdot \right)$ represents the color function that maps the feature into RGB space. Building on this, we adapt the render transformer formulation in $Eq. 6$: $C(\mathbf{r})=\mathcal{C}\left(\sum_{i=1}^{N} softmax\left( \frac{q(\mathbf{f}^{tok})k(\mathbf{f}^{f}_i)^\top}{\sqrt{D}}\right)v(\mathbf{f}^{f}_i)\right)$, Where the $\mathcal{W}\left( \cdot \right)$ translates to $softmax\left( \frac{q(\mathbf{f}^{tok})k(\mathbf{f}^{f}_i)^\top}{\sqrt{D}}\right)$. Furthermore, in our approach, $\mathcal{C}\left( \cdot \right)$ is operationalized as an MLP, which serves to decode the integrated feature into its corresponding RGB value. We will incorporate this explanation into the next version. **1.2. Explanation of FeatureFusion Block.** The FeatureFusion Block is designed to merge both the volume feature and projected features at each point into a single integrated feature. This merged feature then undergoes subsequent operations. In detail, the volume and projected features are concatenated and passed through a transformer for refinement. The refined projected features, when combined with the relative direction of each image, undergo a MLP to infer the individual weights of the projected features. The outcome—a weighted sum of the projected features—is then concatenated with the refined volume features. This resultant feature serves as the foundation for the ensuing processes. **1.3. Explanation of "Meta-ray" token.** The "meta-ray token" acts as a universal token across the network, shared throughout scenes. At the outset of training, this token is initialized and engages in every rendering procedure during training, positioned as the primary token in the input sequence of sample points, akin to the CLS token in ViT. Distinctly, it does not derive from the point features of the input and remains separate from positional coding. As the network evolves continuously, this token gains the capability to encode specific statistical properties intrinsic to rendering. **1.4. Explanation of $f^{img}$ and $F_i$ in line 161.** $f^{img}$: This notation represents an intermediary phase of a feature from the input views, as extracted by our hybrid extractor. Later on, the conjunction of $f^{img}$ and $f^{v}$ amalgamate to create $f^f$. We recognize the need for clarity here and will address this in our upcoming version. $F_i$ in line 161: This notation represents the set comprising $f^{img}$ and $f^{v}$. We elucidate the above-mentioned issues in our revised version to alleviate any ambiguities. **Q2. Real interaction of light occurs in spatial domains… temper the claims.** We will tune down the claim the claims about particle interactions. Our primary contribution is the identification and rectification of inherent limitations in the widely used volume rendering approach, specifically its oversimplification of incident photon modeling. This issue has been largely unaddressed in prior works. However, modeling real interaction between photon particles face many challenges, such as the high cost of computation. We approximate this problem as interaction over ray samples, as we show in $Sec. 3$. In addition, our approach is much more cost-effective. We hope such observation can pave a new research direction for the community. **Q3. Qualitative comparison with SparseNeus.** Please kindly refer to the rebuttal PDF $Fig. 2$ for a comparative analysis between our proposed ReTR and SparseNeus. It's important to highlight that SparseNeus requires fine-tuning on specific scenes to attain the reported outcomes, whereas our approach involves direct inference on previously unseen scenes. We present results for both the direct inference and the fine-tuned scenarios for SparseNeus. In both instances, ReTR consistently demonstrates superior performance over SparseNeus. **Q4. Time evaluation.** Neus: RTX 2080ti, 16 hours per scene training. SparseNues: Two RTX 2080ti; pretraining takes 3 days and requires 20 mins per scene fine-tuning. VolRecon: single A100, pretraining takes 3 days; no further finetuning is needed. ReTR (ours): single RTX3090, pretraining takes 3 days; no further finetuning is needed. Inference takes about 30 seconds to render one image (DTU). **Q5. Depth supervision.** In $Tab. 5$ of the main manuscript, we compare ReTR against various baselines, considering both scenarios: with and without depth supervision. Notably, ReTR consistently surpasses these baselines in both settings. **Q6. Weight of the ray sum to 1.** For a more in-depth exploration of this issue, please refer to Appendix $B.1$. --- Rebuttal Comment 1.1: Title: Post rebuttal Comment: I want to thank the authors for making an effort in their rebuttal and addressing the reviewers' concerns. The authors addressed most of my concerns. I still don't fully agree with the presented photon interaction over ray samples, and I feel like this discussion is a bit redundant. I lean on keeping my original score, since the paper requires additional clarifications. I suspect the paper requires a big revision to incorporate all the detailed explanations presented in the rebuttal.
Summary: This works focus on generalizable asset reconstruction: given a few posed images, predict the 3D representations using a network. Instead of using volume rendering to compute the transmittance, the authors propose to use transformer on the sampled points to compute the weight of each point. Besides, the author also improve the CNN architecture for feature extraction. Extensive experiments are conducted on multiple datasets, demonstrating better performance. Strengths: 1. Better performance compared to previous compared with previous SOTA methods 2. Code is attached and will be release. 3. method is well explained Weaknesses: 1. Compared to previous methods SparseNeus and Volrecon, this work seems somewhat incremental. Major difference is using a hybrid CNN extractor (Fig 3) and a new transformer architecture. The "occlusion transformer" and "render transformer" seems to be two transformers with fancy names, and I don't see significant difference from the transformer in volrecon. Though Volrecon is a CVPR2023 paper, but apparently the authors use its codebase for develop , as can be seen in the code.zip in the supplementary. 2. I don't agree with the "rethink" title, equation5 doesn't make too much sense to me. It's not explained why the equation can satisfy the occlusion-aware, and especially no guarantee of consistency across multi views. For example, for a sampled point $x$, its weight may be $1$ for the ray $r_1$, but may be $0$ for the ray $C(r_2)$ even when $x$ is the nearest point in ray $r_2$. Therefore, I'm not fully convinced with that Eq 5 is a better modelling of the rendering, and the "rethinking" seems be some sort of exaggeration. Furthermore, given the goal of this work is reconstruction, I don't see why loosing the physics constrain in rendering can benefit the learned geometry. ## Justification of rating. 1. Pros: results are solid (multiple datasets, compared with SOTA baselines), code available. 2. Cons: Somewhat incremental, "rethinking" seems a bit exaggeration. Overall I lean to a borderline accept, as no big technical flaws, but I'm not very confident. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please check the weakness section. Besides: 1. This may not be the weakness of this paper, but it's common in the research of this topic. When it's called "generalizable", why it can not generalize to the unseen region of the object? For example, given images of the front views of the statue, why not generalize it to the backview in reconstruction. 2. What's the PSNR compared to other baselines Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Discussion of limitation and broad impacts in the supplementary. No license/asset description according to https://neurips.cc/public/guides/PaperChecklist . But I don't penalize it in the rating. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s time and insightful evaluations. **Q1. Incremental work of SparseNeus and Volrecon.** We respectively disagree with this. Our main contribution is to address the intrinsic limitation of the extensively-used volume rendering (line 25-37, $Fig. 1$ in the main paper), pointing out that it is an oversimplification of incident photon modeling (line 137-138), and it over-relies on input view projected colors (line 139-140). This is largely overlooked in previous works, including SparseNeuS and Volrecon. Then, we show that these limitations can be overcome by generalizing the volume rendering to the reconstruction transformer, which allows the modeling of complicated photon properties in the feature space. In contrast, VolRecon and SparseNeuS still rely on volume rendering that condenses complicated photon-particle interaction into a single density value. We also show that our approach significantly leads to more confident and accurate surface prediction both qualitatively ($Fig. 1$, $(c)$ in the main paper) and quantitatively ($Tab. 1$ in the main paper). **Q2.1 “Rethink” seems soft of exaggeration, and why loosing physics constrain benefits learned geometry.** We appreciate the feedback. The term "Rethink" in our context reflects a process of reevaluation and reconsideration of volume rendering. We observed the oversimplification issue in the current widely used rendering pipelines and were prompted to seek an alternative approach that is more suitable for the reconstruction pipelines. **Q2.2 “Why the equation can satisfy occlusion awareness and multi-view consistency.** The weighting term in $Eq. 5$ (line 160-161) can be further reformulated to $Eq. 10$ (line 196-197), as described in our main paper. The occlusion-aware process is done by the occlusion transformer that only allows the later points to interact with the points in front of it and the meta-ray token. Specifically, we applied an attention mask that masked out the top part above the diagonal. This is to make sure that for a given point $x$, it can only interact with the points in front of it in order to encourage the points to respond to the preceding surface. The multi-view consistency is done by conditioning the view features, similar to the previous MVS-based methods, such as VolRecon. **Q3. Why it can not generalize to the unseen region of the object?** Typically “generalizable” is defined as the ability of the model to predict an unseen scene as we follow this setting from previous works of literature (SparseNeus, Volrecon), but this is an interesting proposal. Traditionally, we approach this task from a **perceptual** standpoint. In this light, we solely generate surface geometry from three views, rigorously constraining the result with the ground truth, eliminating any element of randomness. However, reconceptualizing this task from a generative vantage point is intriguing. This would enable the network to "imagine" aspects previously unseen or uncharted. We believe this offers a promising avenue for exploration. **Q4. PSNR comparison.** The goal of this work is to accurately reconstruct the surface geometry given sparse input views. Like many previous studies, we use the chamfer distance to measure the accuracy of our reconstructed meshes. Although our main focus isn't on novel view synthesis, we've included this aspect in the Appendix for a comprehensive review. Please refer to the $Tab. 3$ (line 75-76) in the Appendix.C for detailed results. It's worth noting that our method outperforms VolRecon in the novel view synthesis. For additional qualitative results, please see the rebuttal PDF $Fig. 3$. --- Rebuttal Comment 1.1: Comment: Thank you for your constructive feedback. We acknowledge your reservations about using the term "rethinking" in our title. In light of your and AC's feedback, we've decided to revise our title to "**ReTR: Modeling Rendering via Transformer for Generalizable Neural Surface Reconstruction**." We believe this better encapsulates the essence of our paper without overemphasizing the novelty.
Rebuttal 1: Rebuttal: We deeply appreciate the reviewers for their thoughtful feedback and time invested in evaluating our work. We're heartened by Reviewer DVpn's commendation of our solution as "interesting in generalizable neural surface reconstruction" and by Reviewer TPgh's acknowledgment that our "results are solid." We are pleased that Reviewer Rozf found our paper “well-written” and “presents a general framework serves as basis for existing methods”. Moreover, we are encouraged by Reviewer feWL characterization of our work as "sound and effective", noting the "insights proposed in the paper". And by Reviewer 1Qeo's praise for our “novel and interesting” idea, complimenting that our paper is “well presented and easy to follow”. We will address each of the reviewers' additional comments in our subsequent responses. Thank you again for your invaluable feedback. Pdf: /pdf/48977c6206f57c4984fa5657de4715ea957a7009.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper introduces a interesting solution for volume renderings in generalizable neural surface reconstruction by leverage Transformers to predict depths and colors from feature volumes. The results on sparse view reconstruction prove its useness. Strengths: The authors identify the limitation and derive a general form of volume rendering and propose ReTR, a learning-based rendering framework utilizing transformer architecture to model light transport. A hybrid feature extractor is also proposed for achieving better performance. Weaknesses: Can we replace the volume rendering in the optimization-based methods (e.g. NeuS / VolSDF) with the learned Transformer ? Or is the solution only works for generalizable neural surface reconstruction? What is the performance in scene-level sparse view reconstruction, i.e., Replica/ScanNet. Will HybridExtractor also works for volume rendering based methods (e.g. SparseNeuS / VolRecon) ? Why Transformer? Will CNN/MLP also works for this design? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weakness above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See the weakness above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s time and insightful evaluations. **Q1. If our learned transformer can replace volume rendering for optimization-based methods.** Yes, we've conducted qualitative experiments, as shown in Figure 1 of the rebuttal PDF, using optimization-based methodologies. Rather than traditional methods, we used spatial points and applied our learning-based rendering for scene optimization. Our method notably outperforms NeuS qualitatively (high kurtosis of weights along the ray), hinting that transformers might introduce a new direction in optimization-based reconstruction, similar to NeuS and VolSDF. **Q2. Performance in scene-level sparse view reconstruction.** In addition to evaluating the DTU and BlendedMVS datasets, we further show the reconstruction results of our ReTR on scene-level datasets, specifically ETH3D and Tanks & Temples, following the same setting as VolRecon. The outcomes of these evaluations are visually represented in $Fig. 6$ of the main paper. More results will be included in the appendix in the next version. **Q3. Will HybridExtractor also work for volume rendering-based methods?** Yes, the HybridExtractor (HE) is adept at extracting both low-level to high-level features. Additionally, it potentially diminishes computational complexity by circumventing the use of the encoder segment of the 3D U-Net. To further substantiate our proposition, we integrated our HE into the VolRecon, resulting in a notable enhancement in performance, achieving a mean cd of 1.35. **Q4. Why Transformer?** Transformers have been extensively validated across diverse tasks owing to their exceptional capability in handling sequential feature interactions. The light transport effect can analogously be construed as the interaction between a photon and a particle. As we model the light interaction as rays, the transformer becomes a nature choice as it can model the sequence of points along rays effectively. In contrast, CNNs and MLPs appear to demand more tailored designs. Thus, we opted for the transformer to implicitly model the light transport effect. Nonetheless, we deeply appreciate the reviewer for highlighting this matter, suggesting an avenue worthy of future exploration. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal. Comment: Thanks for the rebuttal, most of my concerns are addressed. I tend to keep my score. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our efforts to address your concerns. We appreciate the time and expertise you've invested in reviewing our work. Best, Authors
null
null
null
null
null
null
Learning Large Graph Property Prediction via Graph Segment Training
Accept (poster)
Summary: This work uses graph segment training to reduce memory requirements in order to address the scalability concerns of training with large graphs. Additionally, to reduce computation time, historic embeddings for graph segments are stored. These historic embeddings are updated once for all embeddings at the end of training and periodically dropped during training according to the Stale Embedding Dropout to reduce the bias of stale embeddings to the prediction head and loss function. Strengths: Graph segment training is an intuitive and seemingly successful method for achieving high graph property prediction on large graphs while reducing maximum memory requirements and total runtime. The study of stale embeddings in this work is particularly important as embedding table lookup have been utilized in prior studies. [1] Zhang, Shichang, et al. "Motif-driven contrastive learning of graph representations." arXiv preprint arXiv:2012.12533(2020). [2] Tan, Qiaoyu, Ninghao Liu, and Xia Hu. "Deep representation learning for social network analysis." Frontiers in big Data 2 (2019): 2. Weaknesses: The experiments only evaluate GST and its several variants. Some evaluation on other large graph training techniques should be included. Although some of these are used for node classification, they can easily be adapted and evaluated for graph property prediction. [3] Zou, Difan, et al. "Layer-dependent importance sampling for training deep and large graph convolutional networks." Advances in neural information processing systems 32 (2019). [4] Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, and Cho-Jui Hsieh. Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 257–266, 2019. [5] Huang, Kezhao, et al. "ReFresh: Reducing Memory Access from Exploiting Stable Historical Embeddings for Graph Neural Network Training." arXiv preprint arXiv:2301.07482 (2023). In particular, [5] also utilizes historical embeddings for training on large graphs. A comparison with this method specifically would be useful to include. Technical Quality: 3 good Clarity: 3 good Questions for Authors: When partitioning the graph, what is done with the connecting edge(s) between segments? Is this feature information lost? Would we see performance improvements if this structure information was reincorporated into the model. For example, if each graph was processed twice, using different segmentation sets each time, then the potential loss of structural information can be recovered. Rather than performing stale embedding dropout, would it be reasonable to simply update the segment embedding table at fixed intervals? Or, update stale embeddings rather than dropout according to the stale embedding keep probability. In Eq. 2, why is the effect of stale historical embeddings being characterized by the difference between the L(F’(h_s^{(i)}⊕\hat(h)_j^{(i)})) and L(F’(⊕h_j^{(i)}))? It doesn’t seem that the sampled graph segments are being accounted for in the calculation of the second loss. To mitigate differences not caused by stale historic embeddings, would it not be better to concatenate the sampled graph segments in both loss calculations? Perhaps this is a confusion of notation, in which case, j should not be overloaded to represent either any segment in J^{(i)} or any segment in J^{(i)} excluding those in S^{(i)}. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback and insightful comments. We appreciate the confirmation that the proposed method is important and our proposed method is successful. We hope we are able to address the review’s concerns, and respectfully ask to consider increasing the score. **Q1. Some evaluation on other large graph training techniques should be included. Although some of these are used for node classification, they can easily be adapted and evaluated for graph property prediction.** We thank the reviewer for the suggestion. We will integrate the mentioned references into the related work section and provide a discussion. It's worth mentioning that we did assess a baseline, GST-One, that trains on a singular sampled segment for each cycle and, when contextualized, shares significant similarities with Cluster-GCN [4]. Additionally, we'd like to emphasize that adapting prior methods, such as [5] which uses historical embeddings, to our scenario is impractical. The method in [5] retains a historical embedding for every node, which makes it unfeasible to store all the historical embeddings given our conditions, i.e., for the malnet-large dataset, the embedding table needs to store approximately $2 \times 10^9$ rows. Taking into account the hidden dimension, this amounts to about 1TB of memory. The memory requirement would further surge if we raise the count of training graphs. [5] Huang, Kezhao, et al. "ReFresh: Reducing Memory Access from Exploiting Stable Historical Embeddings for Graph Neural Network Training." arXiv preprint arXiv:2301.07482 (2023). **Q2. When partitioning the graph, what is done with the connecting edge(s) between segments? Is this feature information lost? Would we see performance improvements if this structure information was reincorporated into the model.** We thank the reviewer for the question. There's a possibility of some information loss, but our empirical studies show it doesn't greatly impact performance. Addressing the reviewer's query, we delve deeper into Vertex-Cut partition algorithms in Table 5 of Appendix. Theoretically, Vertex-Cut approaches, which distribute edges among various machines and replicate nodes as required, might experience less information loss compared to Edge-Cut techniques. Our hands-on results indicate that all partitioning algorithms which preserve local structure exhibit comparable performance. This infers that edges linking different segments don't substantially influence the ultimate prediction accuracy. **Q3. Rather than performing stale embedding dropout, would it be reasonable to simply update the segment embedding table at fixed intervals? Or, update stale embeddings rather than dropout according to the stale embedding keep probability.** We thank the reviewer for the comment. While it is feasible to proceed in that manner, it's important to highlight that refreshing the entire segment embedding table might require more time than completing one training epoch (as the number of segments is usually an order larger than the number of segments we trained on every iteration), resulting in significant overhead. Furthermore, our results indicate that GST+EFD outperforms GST in accuracy. Hence, we believe it's improbable for this alternative to surpass our suggested algorithm in either efficiency or accuracy. **Q4. In Eq. 2, why is the effect of stale historical embeddings being characterized by the difference between the L(F’(h_s^{(i)}⊕\hat(h)_j^{(i)})) and L(F’(⊕h_j^{(i)}))? It doesn’t seem that the sampled graph segments are being accounted for in the calculation of the second loss. To mitigate differences not caused by stale historic embeddings, would it not be better to concatenate the sampled graph segments in both loss calculations? Perhaps this is a confusion of notation, in which case, j should not be overloaded to represent either any segment in J^{(i)} or any segment in J^{(i)} excluding those in S^{(i)}.** We thank the reviewer for the comment. Indeed, the reviewer has a valid point. We reloaded $j$ to save some space in notation. We will try the reviewer’s recommendation in the final version. --- Rebuttal Comment 1.1: Comment: The authors have successfully addressed most of my concerns. The main contribution of this work is realizing GNN training with very large graphs on limited resources. The proposed algorithm is simple and effective. However, such type of solutions would require a ton of technical tricks and engineering effort for arbitrary large graph data. Fortunately the authors should have implemented those on the set of data they investigated. Open-sourcing the solutions, no matter where the paper is accepted, would be the critical factor of the impact and contribution of the work. Generally describing the segment-then-train idea and reporting a set of numbers from the authors' own side would not help the community significantly. I'll raise my score. I hope the authors would consider seriously about the real impact of the work. --- Reply to Comment 1.1.1: Title: Reply by Authors Comment: Thank you for your insightful review and for recognizing the value of our work on GNN training with large graphs. We appreciate your emphasis on the importance of open-sourcing our solutions. In response to your comments, we are pleased to inform you that we have already released the code. We believe this step aligns with your suggestions and will contribute positively to the community.
Summary: This paper aims to predict properties of very large graphs, by segmenting the large graph into multiple subgraphs with the existing graph partitioning algorithm and then learning over segmented subgraphs where gradients are calculated on some of them for memory-efficient training. Also, to further efficiently train with segmented subgraphs, the authors use the embedding table that stores subgraph representations and provides embeddings for subgraphs that are not back-propagated, while calculating embeddings for remaining subgraphs. Moreover, in order to prevent the staleness issue where subgraph representations in the embedding table are outdated, the authors not only finetune only the property prediction head as the post step of training but also randomly drop some subgraph representations with regard to their stalenesses. The authors validate the proposed method, namely Graph Segment Training (GST), with its variants (GST+EFD) on multiple large-scale graph datasets, showing the efficacy of the proposed methods. Strengths: * The idea of training with partitioned subgraphs of the large graph for its property prediction problem is interesting, novel, and highly valuable to the graph community. * Each proposed ingredient (e.g., historical embedding table, prediction head fine-tuning, and stale embedding dropout), which composes the final GST+EFD architecture, has its own unique benefit in learning with segmented subgraphs of the large graph; having solid contributions. * The theoretical results show the benefit of stale embedding dropout, which randomly drops some subgraphs with respect to their stalenesses and results in reducing bias from stale embeddings. * The proposed GST and GST+EFD outperform the full graph training mechanism while being much more efficient. * This paper is extremely well-written. All contents are clear and easy to follow. Weaknesses: * I don't see any. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: * How to fetch the embeddings of subgraphs from the embedding table, if the embedding table does not have the representations for them. For example, at the beginning of training, the historical embedding table is empty and there may be no representations of partitioned subgraphs. * It is unclear why GST+EFD can outperform GST, given that GST calculates all subgraph embeddings every time, while GST+EFD sometimes uses the stable subgraph embeddings from the embedding table. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors do not discuss the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback and insightful comments. We appreciate the reviewer for confirming that our paper is novel, extremely well-written and highly valuable in the graph community. **Q1. How to fetch the embeddings of subgraphs from the embedding table, if the embedding table does not have the representations for them. For example, at the beginning of training, the historical embedding table is empty and there may be no representations of partitioned subgraphs.** We thank the reviewer for the question. For historical embeddings, we employ a 0 initialization. Therefore, at the onset of training, if a segment hasn't been updated previously, its representation defaults to a vector of zeroes. **Q2. It is unclear why GST+EFD can outperform GST, given that GST calculates all subgraph embeddings every time, while GST+EFD sometimes uses the stable subgraph embeddings from the embedding table.** We appreciate the reviewer's feedback. Our presumption is that the introduced Staled Embedding Dropout offers supplementary regularization, fostering improved feature representation. Concurrently, Prediction Head Finetuning assists in learning alignment and thus mitigates staleness. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my questions. After reading other reviews and responses, I do not have any more concerns or questions. --- Reply to Comment 1.1.1: Title: Reply by Authors Comment: We appreciate your feedback and are pleased to acknowledge that our responses have successfully addressed your concerns!
Summary: This paper studies an important problem on large graphs. The authors propose a new Graph Segment Training (GST) method for large-scale prediction of properties. The proposed method utilizes a divide-and-conquer approach to allow learning large graph property prediction with a constant memory footprint. GST divides a large graph into segments and then backpropagates through a few segments sampled per training iteration. Extensive experiments demonstrate the effectiveness of proposed method. Strengths: - This work addresses an important problem of property prediction on large graphs, which has applicability in many real-world settings. - The proposed GST framework uses a divide-and-conquer approach to enable large-scale property prediction. - The experiments are well-designed and demonstrate the effectiveness of the proposed framework. Weaknesses: - The paper could benefit from a more detailed discussion of the limitations and potential future directions of the proposed framework. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How efficiency does the proposed framework compare to existing methods for large graph property prediction? - Could the proposed method be used to other types of large graphs, such as social networks or biological networks? - How sensitive is the proposed framework to hyperparameters? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper could benefit from a more comprehensive discussion of the limitations and broader societal impact of the proposed framework. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback and insightful comments. We appreciate the confirmation that the problem we studied is important and experimental design is well-conducted. We hope we are able to address the review’s concerns, and respectfully ask to consider increasing the score. **Q1. How efficiency does the proposed framework compare to existing methods for large graph property prediction?** We thank the reviewer for the question. Our GST framework is tailored to train on large graphs, a task that previous methods struggled with due to memory limitations. This highlights our method's memory efficiency in contrast to earlier approaches. As depicted in Table 3, our GST framework also boasts improved runtime efficiency (3x faster). Importantly, our final algorithm, GST+EFD, introduces only a slight increase in time compared to GST-One. The latter is trained on a singular sampled segment per cycle and can be viewed as a theoretical minimum time benchmark. **Q2. Could the proposed method be used for other types of large graphs, such as social networks or biological networks?** We appreciate the reviewer's question. Indeed, our suggested approach is not limited to any particular type of graph. Most existing social network datasets tend to emphasize node-level or link-level tasks. On the other hand, the publicly available biological networks are generally of a smaller scale. This is the reason we didn’t test on these two types of graphs. **Q3. How sensitive is the proposed framework to hyperparameters?** We thank the reviewer for the question. We conducted an ablation study adjusting some design parameters as shown in Figures 2-4. Our findings indicate that the GST+EFD configuration is notably robust to variations in maximum segment size and partition algorithms. It's worth noting that our optimization parameters remain consistent across all methods and are adopted directly from earlier implementations for Full Graph Training. --- Rebuttal Comment 1.1: Comment: Thanks for your response, I will keep my score.
Summary: This paper deals with large graph learning tasks via graph segmentations. More specifically, in each training step, the author sample nodes from graph segmentations and only update parameters related to the selected nodes. To optimize memory consumption, the author further introduced a historical embedding table. To bridge the training and prediction gap, the author also designed a prediction head fine-tuning scheme. Besides, the author provided some theoretical analysis of the proposed method. Empirical results show the proposed method remains good memory efficiency and test accuracy. Strengths: 1. The paper targets an important problem with high application value, which makes property predictions on Large graphs. The proposed segment training idea looks reasonable and works well practically. 2. Based on the graph segmentation idea, the author has made comprehensive and detailed consideration of the memory usage, the training-testing gap, and the theoretical analysis of the historical approximation bias, which makes the proposed method more technically sound. 3. The paper is well-organized and easy to follow. Weaknesses: 1. The proposed method didn't discuss how to deal with the inter-segmentation training, for example, how to learn a model with link prediction between two nodes from different graph segmentations. 2. About Baselines: The experimental part does not compare with other large-graph learning baselines such as GraphSage[1]. More baselines should be compared to evaluate the effectiveness of proposed GST. Reference: [1] GraphSage: Representation Learning on Large Graphs, NIPS 2017. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How to adapt the proposed method with graph properties learning that are between segmentations? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The proposed method can process large-scale graphs, which might be used for user information mining on large social networks. There are potential risks to user privacy when applying such methods. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback and insightful comments. We appreciate the reviewer for confirming that our paper is technically sound and easy to follow. We respectfully ask the reviewer to consider increasing the score if our clarification has addressed the concerns raised by the reviewer. **Q1. How to learn a model with link prediction between two nodes from different graph segmentations.** We thank the reviewer for the question. We would like to clarify that we use “Graph property prediction” to denote that the entire graph should deserve one prediction (contrast with: node-level, and edge-level, respectively, where each node and edge should receive a prediction). “Graph property prediction” includes “graph classification” (both single label or multi-label setting), “regression” (e.g., in chemical molecules, a graph-level GNN could output continuous values such as “boiling temperature”), ranking, and so on. Explicitly trying to predict properties of a given link connecting two segmentations is essentially a link prediction problem, which we believe is not in the scope of this paper. **Q2. The experimental part does not compare with other large-graph learning baselines such as GraphSage.** We thank the reviewer for the comment. Most of the prior research addressing either node-level or link-level prediction issues fails to directly translate to large-scale graph property prediction tasks. Nevertheless, we did evaluate one baseline - GST-One, in our tests. It operates by training on a single sampled segment per cycle, which, if we adjust to this context, bears notable resemblance to either GraphSAGE or Cluster-GCN. Our findings indicate that this approach resulted in subpar performance.
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
A General Framework for Equivariant Neural Networks on Reductive Lie Groups
Accept (poster)
Summary: The paper presents a novel and highly general Equivariant Neural Network (ENN) architecture that is capable of respecting the symmetries of the finite-dimensional representations of any reductive Lie Group G. The proposed approach generalizes the successful ACE and MACE architectures for atomistic point clouds to any data equivariant to a reductive Lie group action. The authors demonstrate the generality and performance of their approach by applying it to two different tasks, namely top quark decay tagging (Lorentz group) and shape recognition (orthogonal group). The results presented in the paper are convincing and showcase the potential of the proposed architecture. Strengths: * The paper is well-organized and well-written, providing a clear overview of the problem and proposed solution. * The paper addresses a significant and challenging problem by proposing a highly general ENN architecture that can handle symmetries of the finite-dimensional representations of any reductive Lie Group G, which contributes to the field of equivariant neural networks. Weaknesses: Related concerns are discussed in the questions section. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * A more comprehensive discussion comparing methods based on Lie Groups, such as LieConv, should be included in the paper. * The results presented in the paper might not adequately convey the effectiveness of the proposed method. In order to reinforce the claims and emphasize the practical applicability of the approach, it would be valuable for the authors to incorporate comparisons of experimental results from methods like MACE and LieConv, using examples such as those from the QM9 dataset. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There are no potential negative societal impacts of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and effort in reviewing our paper. We appreciate that you find our paper well-organized and well-written and addresses a significant and challenging problem. You will find our response to your remarks and questions here. We respectfully hope that our responses will be satisfactory and will increase your mark. ## More in depth discussion of LieConv > A more comprehensive discussion comparing methods based on Lie Groups, > such as LieConv, should be included in the paper. In response to your comment, we have incorporated a new section in the appendix, entitled \"Extended Related Work.\" This section delves into a detailed examination of previous literature, including the LieConv method. It is important to note that our work stands out in providing a more comprehensive framework than any previously developed architecture, both in terms of the diversity of groups covered and the level of expressiveness achieved. To our knowledge, our method represents the first instance of a higher-order equivariant neural network for generic reductive Lie groups. Moreover, the applicability of LieConv is confined to compact groups, as it necessitates the explicit computation of an integral over the group. The convolution in LieConv is limited to considering two-body interactions, whereas our approach, G-MACE, accommodates interactions of arbitrary order. To this day and for the reasons outline above, LieConv is only implemented for a restricted number of groups namely abelian groups and subgroups of the $O(3)$ group. Our method and library covers a much broader range of applications in physics and beyond. To the best of our knowledge, no previous work has provided such a general framework. Convolutional Neural Networks (CNNs), which are translation equivariance, initiated the utilization of data symmetry in machine learning architectures. Throughout time, CNNs have been extended to include other symmetries as well. Central to all these generalizations (including LieConv [2]) is the group averaging operation, $$ \text{Avg}(f)(x) = \int_{g \in G} f(g \cdot x) dg, $$ Where $ x $ denotes the input signal or feature, $f$ is the convolution kernel, $G$ represents the group of interest, and $dg$ is an invariant measure on $G$. This transformation is essential, as it converts any convolution into a group invariant convolution. The feasibility of this approach largely depends on the computational simplicity of the integral. This approach has several limitations, - The direct computation of the integral is unstable and inefficient, even for relatively small groups like $O(3)$. - For non-compact groups, a unique invariant measure is absent, and the integral diverges. - The convolution kernel $f$ is usually constrained to a two-body operator . In the case of compact groups, the integral over the group may be calculated via an alternative means. There exists a linear operator, called the Clebsch Gordan operator $\mathcal{C}$, such that, \begin{equation} \text{Avg}(f)(x) = \mathcal{C}(f)(x) \end{equation} Therefore, the complex integral over the group becomes a linear operation. The central aim of our work is to show that this approach can also be generalized to all reductive Lie groups, even non-compact ones, and provide tools to do so including the basis to expand $f$ and tools to compute $\mathcal{C}$ in this basis. ## More comparaison to LieConv > In order to reinforce the claims and emphasize the practical > applicability of the approach, it would be valuable for the authors to > incorporate comparisons of experimental results from methods like MACE > and LieConv, using examples such as those from the QM9 dataset. MACE is a subset of the presented G-MACE architecture, when the group is the rotation group and the task is on molecular data. Independently to this work, the MACE force field architecture has been recently benchmarked on QM9 and compared to LieConv in \[1\]. Here are selected comparisons relevant to this paper, | | $\textbf{Gap}$ | $\textbf{Homo}$ | $\textbf{Lumo}$ | $C_{V}$ | $\mu$ | $\textbf{ZPVE}$ | | |----------------------|--------------|---------------|---------------|--------------|------------|---------------|--------------| | | meV | meV | meV | cal/mol K | D | meV | $\alpha_0^2$ | $\alpha_0^3$ | meV | meV | meV | meV | | $\textbf{LieConv}$ [2] | 49 | 30 | 25 | 0.038 | 0.032 | 2.28 | 0.800 | 0.084 | 22 | 24 | 19 | 19 | | $\textbf{MACE}$ [1] | 42 | 22 | 19 | 0.021 | 0.015 | 1.23 | 0.210 | 0.038 | 5.5 | 4.7 | 4.1 | 4.1 | From this table, you can observe that MACE far outperforms LieConv, thanks to more expressiveness due to its higher-order interactions. In the paper, we compare G-MACE to an extensive range of methods for the Lorentz group and have now included more baselines for the point-cloud classification. In both of these tasks, G-MACE also achieves state-of-the-art performance. Beyond QM9, the other application benchmark of LieConv is the toy image dataset RotMNIST, corresponding to the group $SO(2)$. This group is implemented in our library. However, as it is both an abelian and compact group, specialized architecture has been constructed for this case that performs very well, including steerable convolution. In the paper, we have preferred to focus on problems in which the point cloud representation is more natural than for images and the group theory more challenging. \[1\] Evaluation of the MACE Force Field Architecture: from Medicinal Chemistry to Materials Science, D.P Kovacs, I. Batatia, E.S. Arany, G. Csanyi \[2\] Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data, M. Finzi, S. Stanton, P. Izmailov, A. G. Wilson --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: I appreciate the additional details provided in the rebuttal, as they have addressed the majority of my questions and concerns. Therefore, I will increase the rating.
Summary: The paper proposes a class of G-equivariant neural networks for reductive lie groups that generalizes the previous ACE and MCAE models (which are designed to be equivariant with respect to the orthogonal group O(3)) to more general irreducible Lie groups. A software library (Lie-NN) has also been developed and released for implementation of the proposed method. The experimental results on jet tagging and 3d point cloud classification support the authors' claims about the equivariant model improving model performance. Strengths: - The proposed approach seems novel; I am not aware of other works that design a G-equivariant network in this way. - The paper is generally well-written and technically correct (although I have not performed an in-depth check for the proof of the universality theorem). Occasional variable definitions are missing but I had no trouble following the general flow of the paper. - The experimental results appear to validate the claimed advantages of the proposed method (although more discussion on the computational aspects is needed to assess practicality; see below). Weaknesses: - Since this work is a generalized version of the previous ACE and MACE methods, a brief review of ACE and MACE would be helpful to understand and evaluate the novelty of the method. The current version is presented in a way that contains elements of previous existing methods, without a clear delineation of the new and original aspects of the proposed method. Exactly what is the precise nature of the extensions to ACE and MACE should be made clearer. - While the technical contents seem correct, some effort at providing intuitive explanation and justifications at key places would be helpful to understanding the paper. Descriptive figures, intuitive examples (e.g., basis including 1-particle basis or Clebsch-Gordan coefficients) come to mind as examples. In particular, I am curious about how to construct a 1-particle basis for specific Lie groups, e.g., SO(3). - The formulation in the case of the product group in Section 4 seems to be missing, although there is a mention of the product group in the introduction and in Section 5. Is it trivial to design an equivariant model for the product group G_1 x G_2 with the formulation used in this method? My initial impression is that there may be some subtleties involved, such as the order of the group actions. - A discussion of the computational aspects of this method is missing. When using this model, the Clebsch-Gordan coefficients must be calculated numerically (as mentioned in Section 5), and it would be helful to mention calculation times and errors. Does the calculation time vary depending on the number of basis components? I am also curious about how much time it takes for iteration on backpropagation compared to classic MLP models. - Continuing with the above comment, since the Clebsch-Gordan coefficients are numerically calculated, some numerical calculation errors are inevitable. Is the equivariance of the model maintained in the presence of numerical errors? If not, some discussion, even qualitative, should be provided (e.g., to what extent the model remains equivariant) although quantitative results (e.g., experimental results, figures) would clearly be preferable. Technical Quality: 3 good Clarity: 3 good Questions for Authors: These have been mentioned for the most part in the weaknesses section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No potential negative societal impact of this work as far as I can tell. As mentioned earlier, computational aspects of the method would be helpful. I don't want the authors to feel compelled to show that this method is immediately computationally practical, as that is not the only measure of the value and worth of any new idea, but it would be helpful to indicate to other researchers on what the computational limitations are, and to possibly spur interest in finding improvements. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your time and effort in reviewing our paper. We sincerely appreciate your thoughtful review. We appreciate that you have found our paper to be novel and well-written. You will find here-after our responses to your questions and remarks, including the changes we made to the paper following your review. ## Clarification on extension of ACE and MACE > Exactly what is the > precise nature of the extensions to ACE and MACE should be made > clearer. There are two aspects to this question: (1) the general architectures; and (2) the generation of the generalized Clebsch-Gordan coefficients. \(1\) The architecture of the models we present here is a generalization of the special case of $O(3)$-equivariance treated in previous works. However, for most researchers in the field, it is unclear whether this extension is possible or how to formulate generalization. The purpose of Section 4 is to make precise that this generality is indeed possible and how it can be achieved. To the best of our knowledge this has not yet been communicated anywhere else. \(2\) To make the general framework from Section 4 practical, the most challenging aspect is generating the symmetrisation operation, i.e., the generalized Clebsch--Gordan coefficients. This is achieved in Section 5. This is technically challenging and novel work leading to new software that has a potentially widespread impact across many application areas, as indicated throughout the manuscript. In response to the question, we lightly edited the list of contributions at the end of Section 1, but in terms of writing style, we prefer to keep it a bit understated. ## Examples of 1-particle basis > Some effort at providing intuitive explanation and justifications at > key places would be helpful to understanding the paper. \[\...\] In > particular, I am curious about how to construct a 1-particle basis for > specific Lie groups, e.g., SO(3). Following your remark, we have added an extensive discussion of the 1-particle basis in the Appendix, giving examples for the $SO(3)$ group and the Lorentz group. We hope this will clarify your questions. ## On product of groups > Is it trivial to design an equivariant model for the product group > $G_1 \times G_2$ with the formulation used in this method? My initial > impression is that there may be some subtleties involved, such as the > order of the group actions. Our library, `lie-nn`, functions at the level of matrix representations, making the integration of product groups a natural step. We now provide a more comprehensive explanation of this in the Appendix (see Product of groups), where we include a practical example of calculating non-trivial invariants for the product groups $O(3) \times S_{n}$. It is worth noting that the direct product of groups is commutative. Specifically, for any two groups $G_{1}$ and $G_{2}$, we have the isomorphism $G_{1} \times G_{2} \cong G_{2} \times G_{1}$. ## Computational cost > A discussion of the computational aspects of this method is missing. > Does the calculation time vary depending on the number of basis > components?... We added a discussion to the Appendix of the paper and a brief sentence in Sec 4.1. to reference that discussion. In the Appendix, we make an analysis of the computational cost of G-MACE as a function of the correlation order and order of expansion in the 1-particle basis, in the case of the Lorentz group. In general, it depends on hyperparameter choices, but especially for larger models the product basis $A_{k}$ is the theoretical and practical bottleneck. *In theory* the product basis can be computed at O(1) cost per feature (see arXiv:2202.04140, in fact it is proven that the cost is asymptotically 2 operations per feature), but the current G-MACE implementation does not leverage this algorithm yet as it requires efficient use of sparse tensors, which is difficult on GPU architectures as a naive implementation would require random memory access. At present the G-MACE code uses a highly performant dense tensor format implementation which we believe achieves within a factor of 3-5 of the hypothetical, optimal performance of a sparse code in the low to moderate correlation order regime that is relevant in applications. The code published with this article generalizes this efficient GPU implementation used in $O(3)$-MACE to handle contractions in any groups. Backpropagation is of comparable cost to inference. The computational kernels we employ are relatively simple. The backward pass is 2-5 times more expansive than the forward pass. Classical MLP models exploit BLAS3 and similar dense tensor operations that have been optimized *ad nauseum* by generations of researchers. Our codes do not yet reach a similar level of peak FLOPs performance and hence the cost of our models will be larger *per parameter*. But note that our physical priors (such as symmetries) usually result in models that are much smaller and more data-efficient. ## Numerical error and generation of Clebsch Gordan > Since the Clebsch-Gordan coefficients are numerically calculated, some > numerical calculation errors are inevitable. It > would be helpful to mention calculation times and errors. While numerical errors are generally inevitable, we use numerically stable solvers for solving the linear systems, thereby achieving machine precision errors. Moreover, for $SU(N)$, the Clebsch-Gordan (CG) coefficients are roots of rational fractions. We employ a rounding scheme designed to round to the nearest rational fractions, reaching exact accuracy in this case. Following your remark regarding calculation times, we have added a section to the Appendix including a comparison of CG generation across various sizes of representations and different groups (see pdf in general response). We would like to underscore that the generation of CGs constitutes a preprocessing step; so, this phase does not affect the model's performance. --- Rebuttal Comment 1.1: Comment: I appreciate the follow-up to my queries; my questions and concerns have been sufficiently addressed. I'm still left with the impression that computational and numerical considerations for this method are important and yet are not mentioned as prominently as they should be in the main body of the paper (only in the appendix). Perhaps there's not much the authors can do about this, since revisions to the main body of the paper are not allowed at this stage. --- Reply to Comment 1.1.1: Comment: We are glad to know that our response was satisfactory to you. We plan to include all the new experiments related to computational and numerical considerations in the main part of the paper. This will be done as soon as we are permitted an additional page for the final version. We hope this plan is satisfactory to you.
Summary: The authors generalize MACE, a point cloud network that uses higher-order interactions via tensor products of basis expansions of the features, to being equivariant to arbitrary reductive Lie groups. The paper shows that this setup inherits universality properties from MACE. A generic method to compute the a basis for the equivariant linear maps between tensor products of the representations is implemented. While the paper is mostly theoretical, the authors show strong performance on a SO(1, 3) equivariant task and a point cloud task. Strengths: - The proposed method is an elegant generalization of a popular prior method - The paper is well-written and mostly easy to read. - The code is included - The proposed method could be a useful turn-key equivariance solution for researchers with a niche symmetry problem. Weaknesses: - Figure 1 of the paper appears to suggest that the method works on E(3), but this can't be a reductive group, as the 4D homogenous representation is not decomposable as a sum of irreducibles (contradicting lines 67-68). I suppose the authors mean to write O(3) and treat the translation by canonicalization, but they should clarity that. - The point could experiment should include more baselines. - As the key contribution of the paper mostly lies in implementing the necessary computations for generic reductive Lie groups (sec 5), it would make sense to allocate more space to that and less to the prior sections, which are mostly already covered in prior works. Alternatively / in addition, it would be good to give more background on the material in section 5 in the appendix. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - It would be helpful to include/repeat the definition of B used in equation (20). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors should clarify better that not all Lie groups of interest are reductive. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your positive review and excellent comments. We appreciate that you find our work interesting and well written. Below we respond to your questions and suggestions. ## Clarification on $E(3)$ > Figure 1 of the paper appears to suggest that the method works on > E(3), but this can't be a reductive group, as the 4D homogenous > representation is not decomposable as a sum of irreducibles > (contradicting lines 67-68). I suppose the authors mean to write O(3) > and treat the translation by canonicalization, but they should clarity > that. We thank you for spotting this typo. As you said, the translation group is not reductive, and we usually use canonicalization to construct invariants. Now we refer to the $O(3)$ group in the figure. One interesting aspect of the translation group is that it is an abelian group. Therefore the irreducible representations are still well understood, and our framework could extend to it. We intend to include representations of such groups in further work. ## Baseline on point clould experiment > Point could experiment should include more baselines. We now compare to other state-of-the-art models for 3D shape recognition in Table 3. We have selected the best models we could find that use point cloud or voxel representations. If we are missing any, we would be happy to reference it and, if appropriate, add it to the benchmark. Please find here a copy of this updated table, | Architecture | **PointMACE** (ours) | **PointNet** | **PointNet ++** | **KCN** | **SO-Net** | **LP-3DCNN**| | --- | --- | --- | --- | --- | --- | --- | | Accuracy | **96.1** | 94.2 | 95.0 | 94.4 | 95.5 | 94.4 | | Representation | Point Cloud | Point cloud | Point cloud | Point Cloud | Point Cloud | Voxel grid | Note that we compare only to other point cloud methods. The best current model we are aware of uses additional information in images at different angles and achieves an accuracy of about 98 %. We believe that this omission is fair. ## Balance in the sections > As the key contribution of the paper mostly lies in implementing the > necessary computations for generic reductive Lie groups (sec 5), it > would make sense to allocate more space to that and less to the prior > sections Following your remark, we have shortened the first sections and added a new subsection to the main text discussing the symmetric powers of representations of reductive Lie groups and how to generate the generalized Clebsch Gordan. Moreover, we have added an extensive (5 pages) discussion of the background of reductive Lie groups, Lie algebras, and GT patterns. Please find that in the new Appendix section A.1. We want to emphasize that we tried to put the most technicalities in the appendix, as we want this paper to be accessible to a broad audience. For such an audience, we think that a certain detail of the general $G$-equivariant cluster expansion techniques is important. In particular, it makes precise how the ideas used in the $O(3)$ case generalize. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I thank the authors for their response. My score remains unchanged.
Summary: This paper proposes a framework for building equivariant neural networks on reductive lie groups. The proposed method first constructs a linear model for multi-set functions which is then symmetrised to generate a complete basis of equivariant multi-set functions. The model is also extended to a multi-layer architecture by a message-passing scheme. Strengths: * The idea of generalizing ACE and MACE frameworks to arbitrary reductive Lie groups is new. * Proofs are given in the supplemental material to support theoretical claims in the paper. * A library is provided for developing G-equivariant neural networks. Weaknesses: * The paper has some minor errors in writting. For instance, Table 2 caption should be on top of the table. Tables (e.g., Tabs. (2) and (3)) that report experimental results should be mentioned in the text. * Lack of comparison against state-of-the-art methods for validating the effectiveness of the proposed method on 3D shape recognition. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * The computations in Eqs. (7) together with (16) seem to be expensive. How these can be done in practice ? * It would be interesting to show the impact of higher order messages on the performance of G-MACE in terms of accuracy and computation time. I didn't find this study in the paper and supplemental material. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are discussed in the supplemental material. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our paper. We appreciate that you find our work novel and our open-source library interesting to the community. Below we respond to your questions and suggestions to further improve the paper. ## Formatting of tables > Table 2 caption should be on top of the table. Tables (e.g., Tabs. (2) > and (3)) that report experimental results should be mentioned in the > text. We thank you for spotting these problems. We have fixed the caption and we are now cross-referencing the tables in the text. ## Baseline for 3D shape recognition > Lack of comparison against state-of-the-art methods for validating the > effectiveness of the proposed method on 3D shape recognition. We now compare to other state-of-the-art models for 3D shape recognition in Table 3. We have selected the best models we could find that use point cloud or voxel representations. If we are missing any, we would be happy to reference it and, if appropriate, add it to the benchmark. Here is a copy of this updated table, | Architecture | **PointMACE** (ours) | **PointNet** | **PointNet ++** | **KCN** | **SO-Net** | **LP-3DCNN**| | --- | --- | --- | --- | --- | --- | --- | | Accuracy | **96.1** | 94.2 | 95.0 | 94.4 | 95.5 | 94.4 | | Representation | Point Cloud | Point cloud | Point cloud | Point Cloud | Point Cloud | Voxel grid | Note that we compare only to other point cloud methods. The best current model we know of uses additional information in images at different angles, achieving an accuracy of about $98$ %. We believe that this omission is fair. ## Efficient implementation of Equivariant Product Basis > The computations in Eqs. (7) together with (16) seem to be expensive. > How these can be done in practice ? In general, it depends on hyperparameter choices, but especially for larger models the product basis $\textbf{A}_{\textbf{k}}$ is indeed the theoretical and practical bottleneck. *In theory* the product basis can be computed at O(1) cost per feature (see arXiv:2202.04140, in fact it is proven that the cost is asymptotically two operations per feature), but the current G-MACE implementation does not leverage this algorithm yet as it requires efficient use of sparse tensors, which is difficult on GPU architectures as a naive implementation would require random memory access. At present, the G-MACE code uses a highly performant dense tensor format implementation, which we believe achieves within a factor of 3-5 of the hypothetical, optimal performance of a sparse code, in the low to moderate correlation order regime that is relevant in applications. This renders its cost very attractive. The code published with this article generalizes this efficient implementation used in $O(3)$-MACE to handle contractions in any group. We added a discussion to the appendix of the paper and a brief sentence in Sec 4.1. to reference that discussion. ## Impact of higher order in accuracy and speed > The impact of higher order messages on the performance of G-MACE in > terms of accuracy and computation time. We refer to the previous question: a theoretical optimal evaluation scheme requires only O(1) operations per feature, independent of the correlation order. In practice, the current implementation's cost depends on correlation order. Since all our models seem to be optimal in the range of correlation order 2, 3, 4 (typically 3) and since we are actively working towards the implementation of a a quasi-optimal algorithm, we prefer not to emphasize this too much in the paper. In the Appendix A.13.2, we have added an analysis of the computational cost of G-MACE as a function of the correlation order in the case of the Lorentz group. Please find the figures of this section in the pdf attached to the general response. While in theory, higher correlation means more expressiveness, the relationship between accuracy and the correlation order depends on the dataset. Please find here a table summarizing the accuracy and computational time for different correlation orders on the jet tagging dataset, | **Correlation** | 1 | 2 | 3 | 4| | --- | --- | --- | --- | --- | | **Accuracy** | 93.6 | **94.2** | **94.2** | **94.2** | | **Timings** (ms / jet) | **0.35** | 0.58 | 0.71 | 0.93| In the case of the jet dataset, we see significant improvement going from correlation order one to two and then saturation. In molecular applications, correlation order three is routinely used. It is essential to note that two layers of correlation order three at each layer, give rise to functions of correlation 12. The convergence of the many-body expansion highly depends on data. Low body order is likely enough if the physics behind the data is close to a mean-field limit. --- Rebuttal Comment 1.1: Comment: I thank the authors for their adequate answers. I maintain my original rating.
Rebuttal 1: Rebuttal: We thank all reviewers for their time and effort in reviewing our paper. We are glad you think that our work is "new" (R1) and that our "proposed method is an elegant generalization of previous methods" (R2), addressing "a significant and challenging problem" (R4) of the field of equivariant neural networks. We appreciate that you find our manuscript "well-written" and "well-organized" (R3). In summary, we have updated our manuscript with the following changes. An updated manuscript incorporating those changes will be made available on arxiv within a few days of the deadline. - We have improved the overall clarity of the writing and referencing of results. - We have added a new subsection to section 5 in the main text on symmetric powers of representations and the generation of generalized Clebsch Gordan coefficients for reductive Lie groups. - Fixed the typo in Figure 1. - We added a more extensive number of baselines for the 3D shape recognition task. Here you will find the updated table: | Architecture | **PointMACE** (ours) | **PointNet** | **PointNet ++** | **KCN** | **SO-Net** | **LP-3DCNN**| | --- | --- | --- | --- | --- | --- | --- | | Accuracy | **96.1** | 94.2 | 95.0 | 94.4 | 95.5 | 94.4 | | Representation | Point Cloud | Point cloud | Point cloud | Point Cloud | Point Cloud | Voxel grid | - We added an extensive background on Lie groups and their representation in the Appendix. We expose in more detail the theoretical foundations of our work. In particular, we give a more in-depth summary of the Gelfand-Tsetlin patterns. - We added a new section to the Appendix that gives concrete examples of one particle basis for the case of the $O(3)$ group and the Lorentz group. - We give more details for constructing irreducible representations of the product of groups. We also provide a concrete application of the lie-nn library for computing invariants of the product groups $O(3) \times S_{3}$. - We have added a section in the Appendix on computational cost, both on Clebsch Gordan generation for a wide range of groups and comparing the cost of ablated versions of G-MACE for the Lorentz group. Please find the figures in the pdf attached. - We have incorporated an Extended Related Work section to the Appendix, providing an extended background on methods for constructing equivariant neural networks. This section also helps to clarify our contributions. - Finally, we have published our code on GitHub for complete reproducibility. Below we respond to the reviewers' comments individually. Pdf: /pdf/8ed2125face77e6881782103e87904d8a637183a.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Birth of a Transformer: A Memory Viewpoint
Accept (spotlight)
Summary: The paper studies in detail two-layer transformers and extend the setting of pure associative recall based on data in-context with mixing these tasks with tasks coming from a global birgram model. They then study these two-layer Transformers by freezing layers and probing. Strengths: The paper studies an interesting problem i.e. in-context learning within Transformers, that is believed to be an important characteristic of language models in general. Based on recent work, they aim to mechanistically interpret / reverse engineer the two layer Transformer which in my opinion is an interesting direction to better understand and get intuition on how Transformers work. I like the mixed bigram data setup that the authors propose and study and I share the opinion of the authors that the tension between storing knowledge and quick adaptation / learning from in-context is very interesting and not explored. Progress in this direction is important. Weaknesses: It feels that the paper is very rushed. I think therefore that the presentation can be vastly improved as well as results and analyses refined. Furthermore, I think that the proposed method of freezing the vast majority of layers in the network leads to very biased results and therefore results which might not hold when training on all weights. A couple of design decisions seem also quite arbitrary and it would be I think the job of the paper to justify and investigate them. See Questions. Therefore, I think that the paper is very promising but can and should be improved. Minor things / Language: Key, Query, Value Matrix must not be square and the positional encodings are usually not randomly initialized. Please at least comment on these design decisions. Also you do not use layer norm, I would also expect at least a comment on this. Please use LaTex in your Figures, quite hard to read the legends etc (maybe different background color as well?). The sum in Figure 2 over the positional encodings should be over t? Please describe more precisely Figure 3&4 in the caption. These are not concise i.e. it was not possible for me to understand what you are plotting what the Figures show. Also please increase the font size. At least one interesting citation is missing: https://arxiv.org/abs/2212.07677 The paper studies in details 1) single and two layer Transformer and provide evidence about copying in the first layer and 2) in-context learning in the second layer by gradient descent. This is, I believe, equivalent to the Hebbian-rule / associative recall with orthogonal inputs. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: In general I think the papers focus on the tension between memory and in-context learning is great. Nevertheless, I think that a couple of very interesting experiments, ablations and analyses about design decisions are missing. 1) Can you provide analyses in both extremes i.e. when there are only "triggers" i.e. when all sequences can be learned by quick associative learning and the other way around. This should lead to quite different circuits that can be contrasted against each other. 2) Why do you include a single (linear) feedforward layer in the architecture? The problem should be solvable without (as studied in the induction head paper). If you want to include this layer, then I think the right thing to do would be to include it also after the second layer. 3) Why are you so heavily relying on the freezing of some layers? I actually ran your experiments (with only triggers) and I believe that my matrices do actually show very different behavior. I think that the freezing of the layers, forces you to focus on the superposition too much. If you wouldn't freeze your vocabulary and the input matrix, the model could (and I see this in my experiments) actually produce embeddings which leave token memory free to use for layer computations i.e. essentially concatenate x_i = w_E(z_i) + w_P(i) \approx [z_i, p_i] since the positional encoding is basically alternating 0 and 1 after some dimension. By that, the circuits and mechanism that you find could be very different and therefore not stable wrt. to design decisions. 4) Building on 3): I am quite convinced that all design decisions will change the circuits and mechanism of how the two layers solve your problem. Therefore I think it is very interesting and actually needed to investigate these and make suggestion what these design decisions control. E.g. Add or concat frozen/learnable random/sinusoidal PEs, control the dimension of the vocab and the token dimension, include feedforward layer, etc. 5) The theoretical insights on Learning Dynamics seem a bit vague and less supported by empirical study (didnt look in the appendix). As I understand, in-context learning kicks in later in training, so why study on the random init? At least comment on this shortcoming, if I understood this correctly. Minor things: 5) Can you please better your probing experiments, how exactly do you do that? Which vectors are probed? 6) What do you mean with discrete tokens line 81? 7) Equation 7 - its confusing for me why exactly these weights recover the induction head mechanism. Please elobatore, I can somehow see it but please make this more rigorous. 8) Please explain a bit better your data generation and the statistics. How large is the possibility of conflicts i.e. a -> based on global knowledge and -> based on in-context. 9) Figure 4 left. Why cant the in-context learning loss go to zero? Is it because of the conflicts? Please make the Figures a bit less crowded, hard to see all the lines. Would be great to have scaling plots wrt to K imo. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: I dont think the limitations of design decisions are well discussed or investigated. See Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed review and for the interest to try out our code! We hope our response can provide more perspective about the motivations behind our work, and may help you reconsider your score. *freezing layers* See our general response. In particular, we chose to simplify the architecture as much as possible to have a simpler setup for a clear study. We also found empirically that training all parameters, even when adding layer-norm and MLPs, still leads to correct memory recall associations. *Minor Things* * "Key, Query, Value...": with a single head, V needs to be squared, but we agree that Q/K need not be squared. We will include the case of separate and potentially non-square Q/K matrices, which still can lead to the desired memories for the product $W_Q^\top W_K$. * positional encodings: to our knowledge, when using learned positional encodings, these are randomly initialized. * layer-norm: see general response (it still works!) * Figures: thanks for your suggestions! We will improve this in the next version * sum in Figure 2: we use the variable $s$ instead of $t$ (as used in Eq. 7) in the sum to avoid confusion with the $t$ of the tokens below. *Missing reference* Thanks for pointing out this very relevant paper! We will cite it and compare to it in the next version. The mechanism they describe is indeed similar to ours, and to induction heads in general. However, their construction relies on idealized choices of weights such as the identity matrix (see A.1 in their paper), which could easily be quite different from the weights learned by training, particularly in a setup with discrete data like ours. In contrast, our work (i) provides a precise description of what the weights look like with gradient updates, (ii) verifies this form empirically through memory probes, and (iii) shows theoretically that gradient dynamics on our bigram task indeed recovers these associative memories. *"..analyses in both extremes..."* Our model already covers all cases ($K=0$ means no triggers, $K=N$ means all tokens are triggers). For $K=0$, the model just doesn't learn the induction mechanism since the gradients to the weights of the attention blocks are essentially dominated by noise. For $K \geq 1$, the same induction head mechanism is learned as long as triggers appear in the data (which is why our experiments consider triggers that are frequent tokens). The only difference is that $W_K^2$ in eq.7 only learns associations for trigger tokens that appear in the data. This is described at the end of Section 4.2, but we will clarify it further. *"..single (linear) feedforward layer..."* You are right that global bigrams may be learned without feedforward layers (as in the induction head work), however that would require learning embeddings. With fixed random embeddings, it is much harder to encode global bigrams without feed-forward layers (see response to hHJQ). We prefer encoding this in a linear feedforward layer because its training fits the associative memory viewpoint, as for all internal weight matrices, and might thus capture how other feed-forward layers may store global knowledge in larger networks. Input/output embedding layers likely exhibit quite different learning dynamics that go beyond the message of our work. As a side note, even when training all parameters including embeddings, we observed that the KL probe for $W_F$ still decreases quite quickly, which suggests that gradient dynamics may prefer storying global bigrams in the feed-forward layer as opposed to embedding layers. *"Why are you so heavily relying on the freezing... I actually ran your experiments ..."* Thank you for taking the time to run our code! We tried training all layers including embeddings, and found that the recall probes actually reach perfect recall accuracy (see general response), even when using only triggers (i.e. using the option `--data_args.k 65`). Thus, we believe the model still learns the same mechanism. If however you are using fixed sinusoidal positional encodings, then the first layer might end up learning something different than eq. 7 for $W_K^1$, since you no longer have the near-orthogonality behavior, and in fact you may get a previous-token head behavior without the need to store outer products for all t thanks to the sinusoidal structure (but the other layers still do get behave like eq. 7). This relies on specific embedding structures that depart from our memory viewpoint based on near-orthogonality, which is why we did not include sinusoidal PEs in our paper, but we agree it is an interesting direction! *"The theoretical insights..." The detailed analysis of how the induction head mechanism emerges is deferred to Appendix B.3 as it is quite technically involved. We instead focused on the key ingredient behind the proofs, which is simpler to state, and is about how gradient updates on a lot of data can filter out irrelevant parts elements of input superpositions. Regarding "later in training": we show that if you learn sequentially $W_O^2$, then $W_K^2$, then $W_K^1$, each with a single gradient step from its initialization and on enough data, then you *do* learn the in-context learning/induction head mechanism (i.e., there is no shortcoming). The key point is "enough data" and also what kind of data: learning the global bigrams is typically very fast because there is a lot of such bigrams and the signal is quite "clean" and accessible from the current token's residual stream. In contrast, for the induction head, say, to learn $W_O^2$, this requires finding signal among *all tokens* seen so far, which is much harder and requires more data (in addition to the fact that trigger-output pairs are often less frequent overall compared to global bigrams). We'd be happy to include these intuitions in the paper, if it is accepted. Thank you for your other comments and suggestions, we will happily incorporate them in the updated version. --- Rebuttal Comment 1.1: Title: Thank you Comment: Thank you for your thorough and thoughtful response. I will increase my rather low score accordingly. Thanks again --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for your message and for increasing your score, we appreciate it!
Summary: This paper provides a detailed analysis of how in-context learning behavior emerges in a simplified version of the transformer architecture on a toy task. This work can provide important insight into how in-context learning emerges in LLMs. The toy task is this: given a sequence of tokens of the form $\ldots, a, b, \ldots, a$, predict $b$. The token $a$ is a "trigger" token. If token $b$ (the "output" token) comes after the first occurrence of $a$, then the model must predict $b$ after the second occurrence of $a$. The model is expected to use in-context learning to predict the correct $b$. The rest of the sequence follows a bigram language model distribution. The authors zoom in on two aspects of the model: the "induction head" mechanism that implements in-context learning, and the "associative memory" mechanism in the key/query weight matrices that allows the model to learn the global bigram distribution. They give mathematical justification for the emergence of both mechanisms in their simplified transformer model trained on infinite data. They provide experimental results that provide evidence that these mechanisms are learned in a particular order, and that learnability depends on attributes of the training data distribution, such as the number of trigger tokens per sequence, or whether the trigger token types are randomized. Strengths: This paper provides a very useful case study of how in-context learning behavior emerges in transformers. In-context learning is a hot topic, and it is important to understand architecturally how this behavior emerges, as well as the conditions that are conducive to it. This paper is a good step towards better understanding this phenomenon. I think their bigram task is an appropriate choice of case study. The authors analyze two mechanisms and provide mathematical and empirical justification for both. Originality: Good. I think the choice of task is very useful for the purposes of studying in-context learning. Quality: The mathematical analysis and experimental design appear to be sound. Clarity: The paper seems to be well-contextualized with respect to previous work. Significance: High. After reading this paper, I feel I have a better grasp on how in-context learning works in the transformer architecture, and this has important implications for any NLP applications that use transformers. Weaknesses: **Edit:** I have read the rebuttal, and it addressed my biggest concerns. I have two major criticisms of this paper, which is why I did not immediately assign it a higher score: 1. The transformer architecture used in this paper is drastically simplified from real transformers used in LLMs. There is no layer normalization or dropout, and the transformer only has 2 layers. They use linear layers instead of feedforward layers. The key and value matrices are merged into one matrix. Many of the parameters are frozen during experiments and during gradient analysis. I think the paper requires a much more detailed, readable discussion justifying why the authors expect their analysis on this simplified transformer model to carry over to real transformers. I think there is some discussion scattered throughout the paper, but if so, I think it needs to be organized more clearly. I think the results are useful regardless of these simplifying assumptions, but the authors need to talk about this more. 2. Clarity. This was a very difficult paper to read, and this negatively impacted my ability to interpret the results. As I explain in more detail in the Questions section, a recurring issue I had while reading this paper was that information is frequently presented in *reverse* order; I often needed to read several lines, paragraphs, or sections ahead in order to clarify something I was confused about. I think the whole paper would benefit from a round of editing that alleviates these issues. See the Questions section for more details. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Most important questions: 1. 77, 90, 93, 97: Does the decision to ignore layer normalization make the findings of this paper less applicable to real transformers and LLMs? Same question for using only a single head. What about dropout? Same question about using linear transformations instead of feed-forward networks. 1. Fig 1: I had a very hard time understanding this figure, and it's essential for understanding the paper. I don't know what Attn1 and Attn2 are supposed to signify. In Layer 1 column 2, $w_1(a)$ and $w_E(b)$ aren't the only values in superposition there, right? It would also have $p_t$. It's not possible to completely isolate $w_E(b)$ from $p_t$, because they are in superposition in Layer 0, right? I think it would really help to give a symbolic representation of the query, key, and value functions at each layer. If I understand correctly, the inputs at Layer 1 are essentially $(x_t, t)$, where $x_t$ is the input token at timestep $t$. Then $\mathrm{query}(x_t, t) = t-1$ (assuming you can compute this with a linear layer), $\mathrm{key}(x_t, t) = t$, and $\mathrm{value}(x_t, t) = x_t$. This adds $x_{t-1}$ to the inputs of Layer 2. Then in Layer 2, $\mathrm{query}(x_t, t, x_{t-1}) = x_t$, $\mathrm{key}(x_t, t, x_{t-1}) = x_{t-1}$, and $\mathrm{value}(x_t, t, x_{t-1}) = x_t$. Then if $x_T = x_{t-1}$, this lets you predict $x_t$. Is this right? How do you ensure that this only applies when $x_T = a$, and not other token types in the vocabulary? 1. Lemma 1: Could you include a proof sketch in the main text? Could you include a longer discussion of the limitations of your simplifying assumptions in Lemma 1 and how they apply to real transformers? The main problem with this section is that I cannot verify that Lemma 1 is true, and that it applies to real transformers in the way you claim that it does. Won't it totally change the dynamics if the other weights are not fixed? I think it's probably a useful result either way, but you should talk about this more. 1. 197: Could you justify this decision a bit more? How do we know this doesn't drastically change the situation from real transformers? 1. 308: Can you include a proof sketch for Lemma 2? Other questions: 1. Can you clarify which key-value associations you expect the linear layers to learn? Is it the global bigram distribution? 1. 51: If all the examples are generated by a single bigram language model, isn't all of that "global" knowledge? That is, how will you know if the model isn't just memorizing the training data? Are you counting on the fact that the transformer isn't big enough to memorize so many examples? Do you test on out-of-distribution examples that verify that this is the case? 1. 98: What about tied embeddings? 1. 199: I think this assumption is pretty reasonable. But I think it's harder to understand how the induction head would work without a separate query matrix. Could you spend a little time explaining how this would affect the construction? 1. In general it would be very helpful to explain in detail how to design an induction head under these constraints. 1. 208: Again, it would be useful to discuss how W_F would implement this. 1. 219: How? 1. 220: Where do $p_t$ and $p_{t-1}$ come from? Are they sinusoidal positional encodings? 1. Eq 9: Can you explain in words what this is doing? 1. Fig 5: Why is it easier to learn 5 random q than 1 random q? It would be helpful to be reminded that K is the number of triggers per sequence, not the number of token types used as triggers overall. 1. 310: What is $E[x|y = k]$? What is $\hat{p}_W$? Clarity issues: 1. 43: At this point, I was wondering if you would supplement your empirical analysis of the training dynamics with a mathematical explanation. It would be useful to point out earlier that you will do this. 1. Eq 1: Having $x_t$ on both sides of the equation is confusing. I'm not a big fan of the $:=$ notation. 1. How would the induction head mechanism work in the presence of multiple instances of a in the past context? 1. 120: These sentences seem out of order and are hard to read. It's not clear if $b$ is a variable or just a tag to distinguish $\pi_b$. I don't understand what $\pi_o$ is. I was confused by the difference between $\pi_b$ and $\pi_o$. 1. 123: What is the variable $n$? 1. 124: How is $K$ chosen? Is it randomly sampled or set to a constant value for the experiments? 1. Using i and j as variables for token types instead of input positions is confusing, especially since k is used as an index 1. 126: What is the first token in the sequence? How is that sampled? 1. 126: It would be helpful to provide an intuitive explanation in words of the process that this equation implements. 1. Where is $\pi_u$ used? (I see it is used at 131, but this seems out of order.) 1. 129: What is the tiny Shakespeare dataset? 1. 133: Referring to training details in Section 5 out of order makes this part hard to read. Could you move the training details earlier? 1. 134: Is this on the training data, or on a held-out test set? Do you use a validation set? 1. 134: It would be easier to read this results if they were in a table. 1. Fig 2: The labels for the axes on the first plot are missing. It would be helpful to label the y axes. For the left plot, which version of the dataset was this trained on? Is there a reason you can't show the first layer for both models? I don't understand the significance of the red and green boxes. What are the "previous occurrences of the query"? It would help to highlight the triggers and outputs in the axis labels. 1. 134: At this point I was confused about the significance of using fixed vs. random triggers. But after re-reading 124-126, I understand that for random triggers, the set of triggers is randomly sampled for every sequence, so *every* token type has the potential to be a trigger. The only way they are distinguished as triggers in the training data is that the same bigram appears twice in the same sequence. But this could also happen by chance for non-trigger tokens, right? It would be helpful to add a discussion of this to the main text. 1. 153: What does this notation mean? 1. 159: What does O mean? 1. Eq 5: What does z range over? 1. I didn't realize until Eq 6 that $w_E(z)$ means the $z$th column of $E$. Could you define this notation beforehand? 1. 213: I see that this section answers some of my questions above. It would be useful perhaps to introduce or mention this earlier in some way. 1. 238: Just to clarify, you're using a cross-entropy language modeling loss function, right? I see this is partly answered at 254. 1. 247: In the equation for $W_{*}$, what are $v_j$ and $u_i$? 1. Eq 9: What is $M$? How are the $(i, j)$ picked? 1. 307: This seems to answer one of my earlier questions. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: As mentioned above, I think the authors should discuss the limitations of their simplifying assumptions about the transformer architecture more. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the very detailed and encouraging review. *"The transformer architecture used in this paper is drastically simplified ... the paper requires a much more detailed, readable discussion ..."* Thanks for raising this point. We hope that our general response to all reviewers provides useful elements to address this concern. In particular, we view simplicity as beneficial to illustrate our model. We also show that various aspects of our model can easily be extended to more complex architectures, although multi-head attention and multiple layers may lead to identifiability issues which make a clean analysis more challenging. We will do our best to provide a clear discussion of these points in the paper, and will add an appendix with additional experiments and technicalities for extending to more standard architectures. *"clarity"* We will do our best to improve clarity and address the points you raise in the questions. *layer-norm, multi-head, MLPs* Please see our general response, which addresses these. We will include results on these in the appendix. *dropout* We did not consider dropout as it is often not used in modern LLMs (see, e.g., the PaLM paper), but leave its study as an interesting question for future work. *Figure 1* We apologize for the lack of clarity with this Figure, and will improve this in the updated version. Attn1 and Att2 refer to the desired associative memories for the product $W_Q^\top W_K$ (or just $W_K$ in the simplified model with $W_Q = I$) at each layer. You should think of key-query associations as considering two tokens at different positions (instead of separately at each position as you write) and checking if there is a match. E.g. for the first layer, we want $(x_t + p_t)^\top W_Q^\top W_K (x_s + p_s) \approx 1$ for $s = t-1$ and zero otherwise, which is the case for the given choice of associative memory (see also eq.7). We will do our best to better represent the full superpositions at each layer, instead of just two elements. *Lemma 1* We will add a brief proof sketch and add a pointer to the right appendix with the proof. The form of the gradient is valid regardless of the current values of the input/output embeddings, so it could potentially apply at any time during training. In particular, if somehow the embeddings stop moving because they converged to a good place, then the weight matrix will end up learning an associative memory with these new embeddings. In practice (see above) we saw that associations are still correct when training embeddings. This could mean the embeddings converge quickly to something stable, and the weight matrices use those final directions to store associations. *197 (why freeze layers)* See our general response. Basically our motivation was to get the simplest possible model for which we know what the solution should look like ("identifiable"), in order to have a clean and concise description of training dynamics, both empirically and theoretically. Also, if a good "lazy" solution exists where some of the weights don't need to move, it will likely be an easier solution for GD to find! For attention matrices, this still works in our theory as well, as mentioned in the general response, and we will include these results in the appendix. *"..key-value associations you expect the linear layers.."* Indeed, we expect $W_F$ to learn the global bigrams (see eq.8 and the comments below it), at least initially. This would be the case approximately if the two attention layers were not present, at least for non-trigger tokens (following Lemma 1). Our hypothesis for why this also holds initially with attention layers is that all elements in the input superposition to $W_F$ that are not the current token will be mostly noise, since all previous tokens are independent of the next token given the current token (assuming we're on a non-trigger token), so they will get filtered out in the update with enough data (as in Lemma 2). *"global knowledge"* While it is possible that a transformer with very large MLP layers could memorize all sequences in order to guess the correct output token after a trigger, this would likely be intractable since the number of such possible sequences grows exponentially with sequence length. Yet our model gets near-perfect prediction accuracy on tokens even near the end of the sequence, suggesting it uses a better approach. The empirical study also shows that the model is making use of the induction mechanism for predictions. We also tried replacing output tokens at test time, and saw correct predictions (though this generally only works for outputs seen during training, in fact if a token isn't used as an output, we see that the corresponding outer product simply isn't stored in $W_O^2$, see our response to uvp3 for more on this). *other questions* * tied embeddings: these also work in our formalism. * L199: it is only the product $W_Q^\top W_K$ that affects predictions, which is why we preserve the same expressivity by fixing one to the identity. * 208: this is shown in eq. (8) * L219 "how?": thanks to near-orthogonality, any embedding in the superposition that does not appear in the outer products gets filtered out * L220: $p_t = w_P(t)$ by definition, and is fixed at random initialization * eq 9, fig 5: we will clarify these. $K$ is indeed the total number of tokens used as triggers (see L124). Larger $K$ makes it easier to learn the induction head because there is a lot more data involving trigger-output pairs. The values of K used in experiments, if not shown, are given in Appendix D. * L310: we will clarify ($E[x|y=k]$ is the conditional expectation of $x$ given $y = k$, and $\hat p_W$ are softmax model predictions for parameter $W$ respectively) Thank you for the many other comments and suggestions regarding clarity, we will happily incorporate them and will do our best to improve clarity in the updated version of the paper, if it is accepted. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for your response. Weakness 1, MIQ1, MIQ4, OQ3, OQ4: I do agree that simplicity is beneficial for analysis, but the simplified version needs to be tethered to unsimplified transformers in some way, or else the findings of the paper are no longer relevant. I still believe the paper needs at minimum a longer, focused discussion of each of these limitations and how the simplified and unsimplified versions connect. The additional empirical results you have mentioned in the rebuttal will be helpful in arguing this. MIQ2: How close was my original interpretation? And there is no limit to the number of vectors that can be in superposition, not 2 as suggested in the figure, right? MIQ3: Some experimental results demonstrating that the embeddings first converge quickly to an optimum would be helpful here, and in addressing Weakness 1. OQ1: > While it is possible that a transformer with very large MLP layers could memorize all sequences in order to guess the correct output token after a trigger, this would likely be intractable since the number of such possible sequences grows exponentially with sequence length. This is a good point and worth mentioning in the paper. OQ3: Can you explain this in more detail? OQ6: Thanks. This ties into Weakness 2. OQ7: Thanks, this is worth mentioning in the paper. OQ8: Ok, somehow I wasn't able to find this definition in the paper. --- Reply to Comment 1.1.1: Comment: Thank you for your comments. Regarding the links between our simplified architecture and the more standard transformer, we agree that the main paper requires more discussion (in addition to the new results that we'll including in the appendix), and we are planning to add a discussion at the end of Section 4. **MIQ2** What you described in your initial review is indeed a good interpretation of what's going on. What you write as $\text{query}(x_t, t) = t-1$ can be translated into our associative memory viewpoint as the following constraint on $W_Q$: $W_Q(x_t + p_t) \approx p_{t-1}$. This naturally leads to the desired associations when $W_K$ is the identity matrix: $(x_t + p_t)^\top W_Q^\top W_K (x_s + p_s) \approx p_{t-1}^\top W_K (x_s + p_s) \approx p_{t-1}^\top p_s$, which is non-negligible and close to one only when $s = t - 1$, as desired. A small difference with our model in Eq.7 is that we use $\text{key}(x_t, t) = t+1$ and $\text{query}(x_t, t) = t$, i.e., the roles of key and query are swapped compared to what you wrote, but this leads to the same associations. One thing worth mentioning that seems to be missing in your description is the "remapping" done by the first value layer: it is important to use $\text{value}(x_t, t) = w_1(x_t)$ (using notations from Figure 1) instead of just $x_t$, where $w_1(x) = W_O W_V x$ essentially remaps the embedding $x$ to a new, orthogonal embedding. This ensures that the second layer attention head matches $x_T$ with tokens $x_t$ whose *previous remapped token* $w_1(x_{t-1})$ matches with $x_T$, while without this remapping, it could just as well match tokens that are themselves the same as $x_T$. Regarding the question about ensuring the mechanism only applies to certain tokens: when only certain tokens act as triggers (as in the "fixed trigger" setup), this can be achieved by only storing associations for the relevant tokens in the second key/query memories (see the expression for $W_K^2$ in Eq. 7: the summation is only over $k \in Q$, i.e., the set of triggers). For the case of "random triggers", as we discuss at the end of Section 4 (L233-237), the induction head may be active for all tokens, and the model seems to sometimes prefer using it over the global bigram model, particularly if its output is not a frequent next token according to global bigrams (this should indicate that the current token may be more likely to be a trigger in the current sentence). Finally, you are correct that there can be many more vectors in superposition. If the attention heads are sparse and only select one token, then we may expect the superpositions after layer $\ell$ to have $O(\ell)$ elements (O() is the [big O notation](https://en.wikipedia.org/wiki/Big_O_notation) which we use throughout the paper). If the attention is more spread out, we may expect more elements, for instance at initialization, the attention is near-uniform, so that we would have the average of all token embeddings in the sequence already at the second layer. **MIQ3** Thanks for the suggestion, we will include plots of gradient norms, which seem to decrease quickly for the embed/unembed layers, indicating that these layers do not move much later in training. **OQ3** The associative memories we consider rely on two sets of near-orthonormal embeddings $(u_i)$ and $(v_j)$, where the former are inputs and the latter outputs of some matrix (see Eq. 3). Nothing stops these two sets of embeddings to be the same, and setting $w_U(k) = w_E(k)$ for all $k$ provides an example of this in the context of Lemma 1. This is precisely the case of tied embeddings. In practice we observed that using tied embeddings tends to slow down training in our setup, possibly because of the additional correlations it induces, and more importantly because it reduces the overall number of near-orthogonal directions by identifying input and output embeddings, e.g., the output of $W_O^2$ according to eq. (7) would now be an input embedding that may be confused with the embedding of the current token in the residual stream. Nevertheless, it is possible that this weight sharing is beneficial in some tasks for larger models. **OQ 1, 6, 7, 8** We will clarify these. For positional embeddings, these are defined in Eq. (1), and we will consider dropping the overloaded notation $w_P(t)$, which isn't currently used much, and sticking to the $p_t$ notation throughout the paper.
Summary: Given the blackbox that large language models are, this paper tries to use a small simplified view of a 2 layer transformer model, and a synthetic task to understand how global and in-context language statistics are learned by the transformer model. By freezing specific layers, The authors show how the memory recall and in-context accuracy varies. Strengths: The paper presents a nice setup with the simplified transformer model, and the synthetic task for analysis of transformer architecture. The current analysis are great, but just the start and the foundation laid could be useful for future explorations. The proposal around how the global memory and the in-context induction work is supported (partially) through the experiments by freezing various layers. With the experiments, they are able to show that the global bigram statistics are learned faster and the induction heads for in-context bigrams are developed in subsequent training steps. Additionally, the conclusion that having diverse training distribution leads to better generalization is great. So this is a good started to understanding the transformer LLM blackboxes, and the results may apply on not just text, but other modalities as well. Weaknesses: - The results are still very preliminary. The transformer model is simplified, and so is the dataset. It is unclear whether a larger transformer model would use a similar paradigm for learning. Essentially, it might be possible that the results do not extend to larger models and additional experiment needs to be done to verify that. - Transformer models have a lot of other components, and it would be nice to study how they impact the memory of these models. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In the multi-head attention, was any experiment perfomed to understand whether for a fixed model dimension, is it better to have larger number of heads or smaller number and how that affects memorization? - Was there any experiment performed where we just have attention (no feed forward layer), and how that affects global memorization? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Discussions could mention that this analysis may not extend to larger versions of the model, or provide empirical proof that it does. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the helpful review and for the positive assessment of our work. *“The results are still very preliminary…” "Transformer models have a lot of other components..."* This is the main topic of our general response post, which we hope provides a valid justification of our approach, as well as as new results in this direction. In summary, we chose a simple setup in order to provide a more accessible and intuitive description of the weight matrices and their learning dynamics. That said, the “memory” viewpoint for the weights, as well as the insights on training dynamics, extend naturally to more complex architectures, since they are basically a consequence of the fact that inputs and outputs of a weight matrix are embeddings. We will include new experiments and theory justifying this, as presented in the general response. The main difficulty with more complex tasks and architectures is that different mechanisms can be implemented in different layers/heads, making them harder to identify and track. Tackling this in well-chosen or general problems is an important future direction. *"In the multi-head attention..."* We did not experiment extensively with multi-head attention, since a single head at each layer is sufficient for the induction head mechanism needed for our task. Nevertheless, we ran basic experiments using multiple heads on our task, and observed that the associations made by the second-layer attention tends to distribute across different heads (i.e. different tokens are handled by different heads), while the previous-token mechanism at the first layer seemed to use only one of the heads. *"Was there any experiment performed where we just have attention..."* Thanks for the suggestion. We tried an attention-only model, and noticed that the loss on global tokens decreases more slowly than with the feed-forward layer, and ends up at a higher loss number (but note that training embedding layers could circumvent this). For fixed triggers, the task becomes a bit easier and looking at attention maps for the second layer shows that the attention usually focuses on the current token, except for the fixed triggers which need the induction mechanism. Thanks again for your encouraging review. Please let us know if you have any other questions of concerns. --- Rebuttal 2: Comment: Thanks for the clarifications! I have read through the response and will keep the original scores. --- Rebuttal Comment 2.1: Comment: Thank you for keeping your high score of 8!
Summary: This paper studies the dynamics of how induction heads emerge in LLMs during training. The authors describe a simple synthetic task to test their hypotheses on, outline a plausible implementation of an induction head based on associative memory, and describe both empirical and theoretical observations. Strengths: * The paper is well-written and generally easy to follow. Motivations, conclusions, and analysis are crystal clear. * The authors study an interesting question: what mechanism do LLMs implement induction heads with, and how might it emerge during training? * The methodology is clean and nice! I like the inclusion of "memory recall probes," as they are a direct measure of what you're claiming. The results are also appropriately framed in the context of the merits & limitations of the experimental setup. Weaknesses: * Can you better explain the significance of the results? My interpretation is that you framed the $W$ matrices as associative memories and found an order in which they seem to be learned. I'd love to hear what you think the "so what" of this is! What interesting things can we do with this understanding? What more can we learn? Can we make the induction heads better? Implement them manually? Do you expect the learnings here to help us understand larger, more complex models? * It's sort of implied that the framing of induction heads as associative memory is new. What are the other competing mental frameworks? How does this one compare? * It'd be interesting to see experiments that vary the dimensionality of the vectors/matrices. In my experience, superposition is rampant, and LLMs often try to stuff a lot into a not-so-large $d$. How do you expect the findings to change when you can't assume near-orthonormality? Other than a degradation in performance? Minor things: * Figure 1 is nice! It'd suggest labeling the words in the caption with colors or letters, so it's immediately clear what refers to what. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: * Did you check that the actual weights learned by your toy model match the solution you made in any meaningful way? Maybe exactly, modulo a simple transformation or something? Did you find any interesting surprises? * Did you have a chance to test any hypotheses about superposition experimentally, even preliminary results? If so, what did you find? Any unexpected things? Negative results ok! I'd imagine it's quite relevant, and is a better representation of what really happens in transformers. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the encouraging review and helpful suggestions. We are glad you found our work interesting, well written, and that you liked our methodology! *“Can you better explain the significance of the results? …”* Thanks for asking this, it is definitely something we should have discussed more in the submission -- we will comment on it more in the next version. At its core, we hope our work can provide a new language to reason about the internals of transformers, and how they are affected by learning dynamics. Here are examples of areas where it could be useful: * architecture/algorithms: improve optimization algorithms and architectures/initialization by studying how they impact the form of the learned memories * choice of pre-training data: induction heads can be learned more quickly when [trigger, output] pairs are more frequent and more diverse -- perhaps this could extend to more general “reasoning” problems: does synthetic and diverse logic data help? is this why training on code seems useful? * interpretability: presumably, knowing the structure of weight matrices can allow a more fine-grained understanding of what each transformer block is doing * fine-tuning vs model editing: if gradient updates are just adding/reweighting outer products to our memories, can we do fine-tuning in a more controlled and targeted way by manually changing appropriate weights? *“Do you expect the learnings here to help us understand larger, more complex models?”* Yes! The nice thing about this model is that it just assumes that the inputs and outputs of a weight matrix are embeddings (or superpositions thereof), which is basically true anywhere in a transformer other than for embedding/unembedding layers. The main difficulty is redundancy: many different parts of a large model may implement similar mechanisms, so that they may become harder to pin-point and identify during training. This is what motivated us to simplify the architecture into an identifiable model, and similar modifications may work on more complex tasks, but we expect that more general problems will require more work on the interpretability/identification side of things. See our general response for more discussion on this, and additional results in this direction. *“It's sort of implied that the framing of induction heads as associative memory is new…”* Thanks for the question. A perhaps more natural model, which we were initially inclined to believe when we started this work, would be that weight matrices found by training were a bit more “idealized”: for instance, copying could be done with an identity value matrix at the second layer. While recovering matrices like the identity could happen in some settings, we found that in general it is difficult to obtain. One thing we tried in order to see why this intuition fails is to use two different sets of output tokens at training vs test time. Even with tied input/output embeddings (so that the identity approach might actually work on the test tokens), the accuracy on test output tokens is much worse than on train tokens, particularly when the dimension is large [in low dimension, the outer products could span the entire space and actually give something close to the identity, but the overall accuracies are then lower since the embeddings are “less” orthogonal]. We’d be happy to include this discussion in the appendix. *“…experiments that vary the dimensionality of the vectors/matrices…”* Figure 7 in the current appendix shows some basic results in this direction, and we’ll include more experiments that study the effect of d and sample size for the one-gradient-step scenario. In practice, we see that higher d definitely helps store things “more quickly” (in terms of iterations and amount of data), but even d=64 is sufficient for our setup. Regarding superposition, you’re right that there are typically lots of things in the residual stream, but we would argue that d is quite large in LLMs (at least in the thousands), and you can have lots of near-orthogonal embeddings (in fact, exponential in d). The dimension of attention heads is usually much smaller, but we expect that the Q/K matrices filter out a lot of irrelevant embeddings from the residual streams (using a similar process to what we describe in sec. 6), so that the low-d vectors contain just one or a handful of directions. *“Figure 1 is nice! It'd suggest labeling the words”* Thanks for the suggestion! We will change this in the final version if the paper is accepted. *“Did you check that the actual weights…”* Based on your suggestion, we looked a more granular maps illustrating $(v_j^\top M u_i)_{ij}$ for a desired memory M, instead of just the recall metrics. These do indeed show that the desired associations have much larger values than the remaining items, though we also notice that different input tokens may lead to different magnitudes, especially for $W_K^2$, reflecting different frequencies of triggers in the data. *“Did you have a chance to test any hypotheses about superposition..”* Thanks for the suggestion. In our controlled setup where embeddings are just nearly-orthonormal random vectors (including "remapped" random vectors), it is quite easy to check which embeddings are present in a given representation/superposition, by just taking the inner product with each such embedding. Of course this detection becomes more difficult/noisy when there are many elements (e.g. for the initial average attention in a long sequence), but we found that it works reliably for a handful of elements in large enough dimension. We'd be happy to include this in the appendix. Again, many thanks for your comments and suggestions. We hope these clarifications and improvements may help increase your score. Please let us know if you have any more questions or concerns. --- Rebuttal Comment 1.1: Comment: Dear reviewer uvp3, thanks again for the helpful comments and suggestions. As the discussion period nears its end, please do let us know if you have any additional questions or concerns with the paper. We'd be happy to address them. --the authors
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their insightful feedback and valuable comments. We are happy that most reviewers found our work relevant and significant. Indeed, we believe that our insights on the internals of transformers can pave the way for improved methods in several aspects of LLMs, including better optimization algorithms, data selection, fine-tuning, model editing, and interpretability. We provide a response below to a shared concern regarding our simplified setup, and respond to individual reviewers in separate replies. **Why a simplified architecture** The simplified architecture compared to common transformers/LLMs was a concern for multiple reviews. Our goal was to simplify the model as much as possible to ease the understanding of what is happening, while ensuring that the model is still rich enough to capture the relevant phenomena to solve the task. It would definitely be much more cumbersome to illustrate the memory viewpoint, and theoretically study training dynamics on a model where many more components are trained. We hope the simplicity of our architecture can help provide better intuition for what we believe to be a key internal mechanism in all transformer models. **More components** In line with the reviewers' suggestions, we ran additional experiments to check if we obtain similar mechanisms with more trained components. In particular, we trained a similar two-layer model with the following modifications: * train all parameters (including input/output embeddings and all four attention matrices at both layers) * use a ReLU MLP feed-forward layer at the second layer (instead of linear) * add pre-normalization layers We found that despite this added complexity, the "memory probes" described in Section 5 still display the same associations empirically. Concretely, we replace $W_K^\ell$ by $W_Q^{\ell\top} W_K^\ell$ in eq.(7), and~$W_F x$ by $MLP(x)$ for the feed-forward probe (note that these can still be defined even if the parameters are changing). The three recall probes still converge to 1, while the KL probe on the feed-forward layer decreases. This shows that this more realistic model still identifies the same mechanisms empirically. On the theory side, we can also easily show that gradient steps on each attention matrix from random initialization still recovers similar associations despite the redundancies (in the pairs Q/K and V/O). Additionally, we can show that layer-norm and MLPs preserve the associative memory form of the weights: * Adding layer-norm essentially adds a projection operation to the rank-one terms in the gradients. The projection essentially drops outer product terms for which the association was already present. In particular, it plays no significant role at random initialization, thus does not change our single-gradient-step analyses, but is likely important to study optimization stability. * For the MLPs weights, the outer product terms in gradients involve non-linear mappings of superpositions, which may more easily capture interactions between different elements of the superpositions. We'd be happy to include these additional experiments and technical details in the appendix, if the paper is accepted. **More heads and layers** While the experiments above show some robustness to common complexities, the single-head, two-layer architecture remains very simplified compared to large models. Unfortunately, understanding more complex architectures with multiple heads and more layers is more challenging since there is a lot of redundancy and it is unclear which layer or head may be implementing different mechanisms. For instance, we tried using 4 heads instead of one in the two-layer model, and the second layer induction mechanism appears to distribute across all the heads, each head taking care of different tokens (though interestingly the first layer previous-token mechanism seems to only use a single head). This is in contrast to the single-head induction head mechanism we describe, which becomes identifiable (i.e. we know what each matrix ends up doing) thanks to the reduced redundancies. Extending our work to general models and tasks will thus require more work--in the spirit of interpretability--to first identify which parts of the model are implementing different mechanisms, before understanding dynamics. Nonetheless, our results suggest that within each block, gradient dynamics naturally lead to weight matrices that may be interpreted as associative memories. We hope this post clarifies our motivations for the simple model, and we will do our best to make this discussion clearer in the paper.
NeurIPS_2023_submissions_huggingface
2,023
Summary: Authors perform an in-depth study of the toy case of learning associate recall task using causal Transformers with the aim to understand the emergence of in-context learning abilities during training. Informally, they propose a modified bigram distribution where after sampling a sequence, for a special set $Q$ of pre-determined "trigger" tokens, at every occurance of a token $q$ from $Q$, they replace its following token with the token that followed $q$ at its first occurrence in the sequence. I.e. $[...q\ r ....q\ s] \mapsto [...q\ r ....q\ r]$. Hence, for $q$, the model is required to determine $r$ first from the context and, at every following occurance of $q$ output $r$. The authors prove that a simplified 2-layer Transformer (the only non-linearity is softmax) can learn this task via gradient descent and verify this empirically. Strengths: 1. The problem is well-motivated as large language models demonstrate impressive in-context learning abilities and hence it is beneficial to understand how this ability develops. 2. The theoretical statements are non-trivial, interesting and not straightforward to prove. Weaknesses: 1. The associative recall task has been studied in the context of in-context learning of attention-based models before and its not fully clear what novel contributions the authors have made. 2. Given the simplicity of the toy task/model the practical implications and the generality of the findings are unclear (other tasks / architectures). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: . Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: There is no discussion of the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful review. *“The associative recall task has been studied… novel contributions”.* Indeed, several works have looked at similar tasks. Nevertheless, to our knowledge we are the first to have a precise picture of (pre-)training dynamics and as a result, a precise understanding of the form of the weights, in a way that extends to multiple layers. The structure of our bigram data model we introduce is also simple enough that it is amenable to theoretical analysis. Overall, we believe that our study provides many new insights about the internal structure of transformers during pre-training, and can pave the way for new improvements, e.g. for optimization algorithms, data selection, fine-tuning, model editing, and interpretability. *“…practical implications and the generality of the findings are unclear...”* Indeed, this work is a first step. That said, our associative memory viewpoint for weights naturally extends to more complex architectures since gradients will take similar forms, and we chose a simple task precisely to make the viewpoint more transparent and understandable. We also simplified the architecture in order to make the role of each component identifiable, while for a more complex model, there may be many different solutions (e.g. the previous token head could happen in many different layers and attention heads). Other tasks are definitely interesting but might involve very different mechanisms which we believe will require separate studies, yet, copying mechanisms like ours are likely crucial in many different tasks (as the induction head papers illustrate), and enable more complex reasoning operations, e.g. with a semantic hierarchy that may be learned in multiple layers. We expand on this point much more in the general response to all reviewers, and there we discuss additional experiments/theory that shows our viewpoint extends to more complex settings. Please let us know if this response helps with your assessment of the paper, we’d be happy to answer any additional questions. --- Rebuttal Comment 1.1: Title: response 1 to rebuttal Comment: Thank you for addressing some of my concerns - after going through your responses to my review (and to other reviews) I am increasing the score. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thank you for the increase in score! We appreciate this.
null
null
null
null
null
null
Learning via Wasserstein-Based High Probability Generalisation Bounds
Accept (poster)
Summary: This paper focuses on studying PAC-Bayes generalization bounds based on the Wasserstein distance. It introduces novel high-probability bounds applicable to both batch learning (i.i.d. setting) and online learning (non-i.i.d. setting). Unlike previous PAC-Bayes bounds, their bounds are not limited to bounded or sub-Gaussian losses. Furthermore, leveraging their theoretical findings, the authors propose a new SRM training strategy and provide empirical evidence to validate its effectiveness. Strengths: Compared to previous results, the Wasserstein-distance PAC-Bayes bounds presented in this paper require fewer or weaker assumptions. Additionally, unlike previous PAC-Bayes bounds that typically invoke Gaussian distributions, their bounds accommodate discrete Dirac distributions for both the prior and posterior distributions. Weaknesses: Regarding the Wasserstein regularization part, it might be necessary to empirically compare it with similar styles of regularization, such as weight decay $||w||$ and the 'distance to initialization' $||w-w_0||$. Although I believe that a data-dependent prior will provide a tighter bound, it is uncertain whether the data-dependent prior can significantly outperform a data-independent prior (e.g., $0, w_0$) considering the significantly increased computational time required. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Major concerns: 1. There might be a small error in Theorem 2: in the proof of Theorem 2 (Section B.2), you let $\lambda_i=\sqrt{\frac{\ln{(K/\delta)}}{|S_i|}}$ in the last step (i.e. Line 636). However, this choice does not yield the desired $\sum_{i=1}^K\sqrt{\frac{|S_i|\ln{(K/\delta)}}{m^2}}$ in your bound. If I understand correctly, you could consider setting $\lambda_i=\sqrt{\frac{2\ln{(K/\delta)}}{|S_i|}}$, which would result in the last term in the bound becoming $\sum_{i=1}^K\sqrt{\frac{2|S_i|\ln{(K/\delta)}}{m^2}}$. This may be the smallest achievable second term in Section B.2. 2. In Line 206-207, you argue that Eq. (3) contains a trade-off parameter $\lambda$, while Eq. (2) does not. However, it is worth noting that a similar parameter, $\epsilon$, is also involved in your practical training objective, namely Eq. (5). Therefore, the argument regarding the presence of $\lambda$ may not be entirely fair. Perhaps removing Line 206-207 could be considered? 3. Have you considered incorporating the empirical variance term into your algorithm? Since you emphasize its reliance on the prior rather than the posterior, I would appreciate it if you could demonstrate some practical implications. 4. Under the same conditions, such as bounded loss or even zero-one loss, is it possible to compare your Wasserstein-based PAC-Bayes bounds with the standard KL-based PAC-Bayes bounds, especially those that also include variance terms as mentioned in [1,2,3]? Minor comments: 1. Line 131: there is a missing period after $S_K$. 2. RHS of the equation on Line 633: $L{|S_i|}\Longrightarrow L_{|S_i|}$. 3. RHS of the equation on Line 635: $\lambda [S_i|]\Longrightarrow \lambda |S_i|$. [1] Seldin, Yevgeny, et al. "PAC-Bayesian inequalities for martingales." IEEE Transactions on Information Theory 58.12 (2012): 7086-7093. [2] Tolstikhin, Ilya O., and Yevgeny Seldin. "PAC-Bayes-empirical-Bernstein inequality." NeurIPS 2013. [3] Mhammedi, Zakaria, Peter Grünwald, and Benjamin Guedj. "PAC-Bayes un-expected Bernstein inequality." NeurIPS 2019. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: There is no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your careful review, and we answer your concerns below. **Weaknesses** As noted in the general answer, the 'distance to initialisation' is a particular case of our Eq. (5) when $K=1$. This particular case has been plotted in Table 3 of Appendix C.4. More precisely, this table shows that, on most of the datasets, considering data-dependent priors leads to sharper results. This shows the efficiency of our method compared to the 'distance to initialisation' regularisation. Furthermore, we ran a novel experiment with the weight decay generalisation which is gathered in the pdf document updated with the rebuttal. This experiment shows concretely that on a few datasets (namely SENSORLESS, YEAST), when our predictors are neural networks, the weight decay regularisation fails to learn while our method succeeds, as shown in Tables 1 (of our work) and 4 (of the new pdf). More generally, weight decay regularisation outperforms our method 3 times over 11 for batch learning with neural nets. We hope the experiments and the analysis address your concerns. **Major concerns** 1. Thank you for spotting this, you are actually right. Our choice of $\lambda$ was not correct, while yours is. We will correct it in the next version of the document. 2. Thank you again; this remark is indeed unfair. After looking at our practical optimisation procedure, we will remove those two lines. 3. According to our theoretical bounds, it is legitimate to incorporate the empirical variance term when considering a heavy-tailed loss that takes value in all $\mathbb{R}$. This situation is in line, for instance, with reinforcement learning with heavy-tailed rewards, as noticed in line 155. However, we proved in Theorems 2 and 4 that if the loss is nonnegative, then the empirical variance does not intervene in the bound. Thus, as our optimisation goal is driven by the theory, it does not seem relevant to us to incorporate such an empirical variance in a classification problem. However, in order to provide a concrete situation where order 2 moments on the prior only have a positive influence (rather than on the pair prior/posterior) we propose the following example. Assume that order 2 moments of the prior *and all considered posteriors* have to be uniformly bounded by a certain $C$. Let the data distribution of a point $z$ to admit only a finite variance and consider the loss $\ell(h,z)= |h-z|$ where both $h,z$ lie in the real axis. Then, assuming $\mathbb{E}[z^2]\le V$, then by Cauchy Schwarz, we have $\mathbb{E}[\ell(h,z)^2] \le \mathbb{E}[h^2] + 2V\mathbb{E}[|h|] +V^2 \le C$, the last inequality being our assumption. Then, this invites us to consider posterior distributions with first and second moments satisfying the previous inequality, which is restrictive. On the contrary, assuming only such an assumption on the prior distribution, we are allowed to move freely on the space of distribution to find the best posterior. We thank you for asking for more details, as this short example will improve the next version of our manuscript. 4. We provide below further elements of comparison between [1,2,3] and our results, beyond the fact that we used the Wasserstein distance instead of KL divergence. The bounds of [1] are PAC-Bayes bounds for martingales, we focus on their Theorem 4 which controls an average of martingales, similarly to our Thm 1. Under a boundedness assumption, they recover a McAllester-typed bound while Thm 1 is more of a Catoni-typed result. Also, their Theorem 7 is a Catoni-typed bound involving a conditional variance, similar to our Theorem 4. However, they require to bound uniformly the variance on all the predictor sets, which is more restrictive than the assumption of bounding the averaged variance with respect to priors, which is what we required to perform Theorem 4. The bounds of [2] have been obtained for bounded loss functions and iid data; this allows us to exploit techniques involving the 'small' kl divergence between two Bernoulli r.v. Thus, their bounds have been obtained through a certain lower bound on this kl term. Their main result (Theorem 4) contains a term of variance times a KL divergence, exhibiting an interaction between those two quantities. On the contrary, our bounds decoupled the influence of the empirical variance from the one from the complexity term (i.e., the Wasserstein distance). The major advantage to this is that we do not have to assume a constraint on our choice of posterior, such as the assumption below Eq. (7) in [2]. This allows our learning algorithms to move freely in the space of distributions to find our posterior. The bounds of [3] are a continuation of [2] as they provide tighter PAC-Bayes Bernstein inequalities. However, we notice that the involved technical toolbox is closer to ours as they widely used Exponential Stochastic Inequalities (ESI). Indeed, our supermartingale toolbox can be seen as a carefully designed ESI. However, their results remain valid for bounded losses and i.i.d data, which remains more restrictive than our assumptions. We hope this clarifies your concerns and are happy to respond to any additional questions. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thank you for your response! All my questions are clarified. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging that your concerns are clarified and for your review.
Summary: This paper introduces Wasserstein-based PAC-Bayes bounds both for offline (batch learning) and online learning. These bounds are obtained with a clever use of the Kantorovich-Rubinstein duality together with a control of some additional terms appearing in the proof that only depend on the prior. To use the Kantorovich-Rubinstein duality they require the loss to be Lipschitz and to control the additional terms they require the loss to have a bounded second moment, following the techniques from [HG23a]. The extension from offline to online learning comes almost "for free" in the analysis due to the super-martingale strategies developed in [HG23a]. This way, the presented bounds have a set of weak assumptions (Lipschitzess + bounded second moment) which makes them amenable to losses with potentially heavy tails. Another aspect of the bounds is that they allow data-dependent priors. For this, they separate the training dataset $\mathcal{S}$ into $K$ subsets $\mathcal{S}\_i$and dedicate $K$ data-dependent priors $\pi\_i$ that depend on $\mathcal{S} \setminus \mathcal{S}\_i$. This way, they have an average of Wasserstein distances of the posterior and each of these priors weighted accordingly. This way, for a large number of samples the posterior and the priors start to align helping the bound to become tighter. Finally, they employ their bounds to derive regularization terms that are employed to design algorithms both in offline and online learning. They showcase how the usage of these algorithms does not decrease the performance of linear and deep models and that, in some cases, they help these models achieve better performance. Strengths: The strengths of this paper are multiple, but the most important are: - The presented theorems are general (Lipschitz and second-moment assumptions only). - The proofs of the presented theorems are exposed in a simple way, even though the final results are useful. - The theorems allow for deterministic predictors, which is one of the biggest drawbacks of PAC-Bayes bounds. - The results are validated empirically. Weaknesses: There are not many major issues with the paper. My biggest concern is the following: - In the section "Batch algorithm" starting in line 275, you use the equation (5) to guide the training of the posterior, which includes the distance $\lVert \textbf{w} - \textbf{w}_i \rVert_2$ that comes from the Wasserstein distance. However, the employed loss is only Lipschitz with respect to the outputs. How do you justify then the usage of the $\ell$_2 norm of the weights, as it could be that the loss is not Lipschitz with respect to the weights? The same happens later in equations (6) and (7). Other smaller issues are the following: - The experimental results don't seem too strong compared with previous methods such as the one presented in [PORPH+21]. Although they use probabilistic networks, it would have been interesting to observe how the two methods compare to each other with the same network architecture. For instance, with a FCN and Gaussian priors, they show population risk certificates of ~0.02, while the presented here are of ~0.09 and ~0.12. - Some of the writing can be clarified. In particular, I am referring to sentences of the type: "for all $i$, $\mathcal{S}$,$\pi\_i(S,\cdot)$ is $\mathcal{F}\_{i-1}$ measurable". This is not very clear, I would recommend changing all these sentences for something like "the distribution $\pi\_i(S,\cdot)$ is $\mathcal{F}\_{i-1}$ measurable for all $i$ and all $\mathcal{S}$". This small change can improve readability. - I missed some comments on Proposition 5 of [AEM22], which is a Seeger-type bound. How do the presented bounds compare to that? - I am not sure the implementation of OGD is correct, given that it ends up having a larger empirical risk than the regularized version. Maybe I am missing something here, or maybe this showcased some limitations of OGD I was not aware of. I also mentioned it explicitly as a question below. Some typos: - In line 29, it should be $R\_{\mu}(h) - \hat{R}\_{\mathcal{S}}(h)$. - In lines 192 and 193, you forgot to include the hat in $\hat{R}$. - In line 354, it should be $\mathfrak{C}\_{\mathcal{S}}$. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * How do you justify the usage of the $\ell_2$ norm on the weights in equations (5), (6), and (7) if the employed loss is only Lipschitz with respect to the outputs of the hypotheses $h_{\textbf{w}}$ but not necessarily with respect to the weights themselves? * What is the intuition for the fact that Algorithm 2 has a smaller empirical risk than OGD? It seems weird given the fact that Algorithm 2 is essentially a regularized version of OGD, which is an algorithm for ERM. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Some of the limitations are mentioned throughout the text, yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive review and for pointing out the 'multiple strengths'. **Your major concern** Thank you for spotting this - see also our general response. A Wasserstein is defined on a Polish space w.r.t a distance $d$. Indeed our Algos in Eqs. (5-7) must be described w.r.t $d$ and not an Euclidean norm on the weights, we fixed this in the revision. When the predictor space $\cal{H}$ is parametrised by a Euclidean space with Euclidean distance $||.||$, the distance between two predictors $h\_{\bf w},h\_{{\bf w}'}\in \cal{H}^2$ is $d(h\_{\bf w},h\_{{\bf w}'})=||{\bf w}-{\bf w}'||$. This raises the question of transferring the Lipschitzness on $h_{\bf w}$ to one on ${\bf w}$. Lemma 8 ensures this is true for the linear model, but we omitted to prove it was true for our neural nets when weights are bounded. We prove below a new lemma showing that with clipped weights in a Fully Connected Network (FCN) with Lipschitz activation functions, then our loss is Lipschitz w.r.t the weights and we can use the $\ell_2$ norm in our objective. Note that the clipping constant plays no role so we set it to a large value to avoid interfering with optimisation. This has been added to Sec 4.1 and we hope this addresses your concern. **Minor** 1. Discrepancy in performance: this is because we do not minimise the same objective (they use the cross-entropy loss). 2. Thanks - added to the revised paper. 3. We do not compare to Prop. 5 of [AEM22] but to their Thms 11-12, which are versions of Prop. 5 for Wasserstein. See paragraph starting on l196. Our proof does not allow the use of the 'small' kl divergence as we need $\sum_{i=1}^m R_{\mu}(h)-\ell(h,{\bf z}_i)$ to apply our supermartingale trick. Note that our result holds with weaker assumptions than Thms 11-12. **Questions** 1. The L2 norm in experiments is motivated by Lemma 8 and the new lemma. In both cases we prove the loss is Lipschitz w.r.t the output $h_{{\bf w}}$ and ${\bf w}$. 2. For each datum, OGD is performing only one gradient descent step while our method solves a (constrained) optimization problem. Hence, we believe that OGD might suffer from underfitting as one gradient step at each time may not be enough for a non-regularised method. Since Rev. tL5n has the same concern, we decided to add the answer to the paper after line 355. **Lemma's statement** We define recursively a FCN as follows: for a vector ${\bf W}\_1= \text{vect}(\{W\_1,b\})$ and an input datum $x$, $FCN\_1({\bf W}\_1,x)=\sigma\_1(W\_1x+b\_1)$, where $\sigma\_1$ is the activation function. Also, for any $i\geq 2$ we define for a vector ${\bf W}\_i=(W\_i,b\_i,{\bf W}\_{i-1})$ (defined recursively as well), $FCN\_i({\bf W}\_i,x)=\sigma\_i(W\_iFCN\_{i-1}({\bf W}\_{i-1},x)+b\_i)$. Then, setting ${\bf z}=(x,y)$ a datum and $h\_i(x):=FCN\_i({\bf W}\_{i},x)$ we can rewrite our loss as a function of $({\bf W}\_i,{\bf z})$. **Lemma.** *Assume that all weight matrices of ${\bf W}\_i$ are bounded and that the activation functions are Lipschitz with constant bounded by $K\_\sigma$. Then for any datum ${\bf z}=(x,y)$, any $i,{\bf W}\_i\rightarrow\ell({\bf W\_i},{\bf z})$ is Lipschitz.* *Proof.* We consider the Frobenius norm on matrices as ${\bf W}_2$ is a vector as we consider the L2-norm on the vector. We prove the result for $i=2$, assuming it is true for $i=1$. We then explain how this proof generalises the case $i=1$ and works recursively. Let ${\bf z},{\bf W}_2,{\bf W}_2'$, for clarity we write $FCN_2(x):=FCN({\bf W}_2,x)$ and $FCN_2'(x):=FCN({\bf W}_2',x)$. As $\ell$ is $\eta$-Lipschitz on the outputs $FCN_2(x), FCN_2'(x)$: $$|\ell({\bf W\_2},{\bf z})-\ell({\bf W'\_2},{\bf z})|$$ $$\le\eta||FCN\_2(x)-FCN\_2'(x)||$$ $$\le\eta||\sigma\_2(W\_2FCN\_{1}(x)+b\_2)-\sigma\_2(W\_2'FCN\_{1}'(x)+b\_2')||$$ $$\le\eta K\_\sigma||W\_2FCN\_{1}(x)+b\_2-W\_2'FCN\_{1}'(x)-b\_2'||$$ $$ \le\eta K\_\sigma( ||(W\_2- W\_2')FCN\_{1}(x)||+||W\_2'(FCN\_{1}(x)-\text{FCN\}_{1}'(x))||+||b\_2-b\_2'||)$$ Then we have $||(W\_2-W\_2')FCN\_1(x)||\le||(W\_2-W\_2')||\_F||FCN\_1(x)|| \le K\_x ||(W\_2- W\_2')||\_F$. The second inequality holding as $FCN\_1(x)$ is a continuous function of the weights. Indeed, as on a compact space, a continuous function reaches its maximum, then its norm is bounded by a certain $K\_x$. Also, as the weights are bounded, any weight matrix has its norm bounded by a certain $K\_{W}$ thus $||W\_2'(FCN\_1(x)FCN\_1'(x)|| \le ||W\_2'||\_F||(FCN\_1(x)-FCN\_1'(x)||\le K\_{W}||FCN\_1(x)-FCN\_1'(x)||$. Finally, taking $K\_{temp}=\eta K\_\sigma\max(K\_{x},K\_W,1)$ gives: $$|\ell({\bf W\_2},{\bf z})-\ell({\bf W'\_2},{\bf z})|\le K\_{temp}(||(W\_2-W\_2')||\_F+||b\_2-b\_2'||+||FCN\_1(x)-FCN\_1'(x)||).$$ Exploiting the recursive assumption that $FCN\_1$ is Lipschitz with respect to its weights ${\bf W}\_1$ gives $||FCN\_1(x)-FCN\_1'(x)||\le K\_1||{\bf W}\_1-{\bf W}\_1'||$. If we denote by $(W_2,b_2)$ the vector of all concatenated weights, notice that $||(W\_2- W\_2')||\_F+||b\_2-b\_2'||=\sqrt{(||(W\_2-W\_2')||\_F+||b\_2-b\_2'||)^2}$$\le\sqrt{2(||(W\_2-W\_2')||\_F^2+||b\_2-b\_2'||^2)}=\sqrt{2}||(W_2,b_2)-(W_2',b_2')||$ (we used that for any real numbers $a,b,(a+b)^2\le 2(a^2+b^2)$). We then have: $$|\ell({\bf W\_2}, {\bf z})-\ell({\bf W'\_2},{\bf z})|$$ $$\le K\_{temp}\max(\sqrt{2},K\_1)( ||(W\_2,b\_2)-(W\_2',b\_2')||+||{\bf W}\_1-{\bf W}\_1'||)$$ $$\le\sqrt{2}K_{temp}\max(\sqrt{2},K_1)||{\bf W}_2-{\bf W}_2'||.$$ The last line holds by reusing the same calculation trick. This concludes the proof for $i=2$. For $i=1$, the same proof holds by replacing $W_2,b_2,FCN_2$ by $W_1,b_1,FCN_1$ and replacing $FCN_1(x), FCN_1'(x)$ by $x$ (we then do not need to assume a recursive Lipschitz behaviour). Then the result holds for $i=1$. We then apply a recursive argument at rank $i$ by assuming the result at rank $i-1$ reusing the same proof by replacing $W\_2,b\_2,FCN\_2$ by $W\_i,b\_i,FCN\_i$ and $FCN\_1(x),FCN\_1'(x)$ by $FCN\_{i-1}(x),FCN\_{i-1}'(x)$. This concludes the proof. --- Rebuttal Comment 1.1: Title: Answer to rebuttal Comment: Thank you for your rebuttal. My major concern is now addressed with the new Lemma, thanks. This only begs the question on how to choose the hyper-parameter $\varepsilon$ with respect to that Lipschitz constant or, in other words, how much the bound is used for regularization. But you already discuss that in the paragraph after (5), so a small further discussion will do. Also, all my minor concerns and questions are clarified. (P.S.: I missed that in [AEM22, Prop. 5] they needed that $m \cdot \Delta\_S^{\mathrm{kl}} \in \mathcal{F}\_S$ for some reason. Then, I thought one could choose $\mathcal{F}\_S$ to be the set of Lipschitz functions. Sorry for that.). --- Reply to Comment 1.1.1: Comment: Thank you for answering to the rebuttal and your review.
Summary: This work propose PAC-Bayesian learning with the KL divergence replaced by the Wasserstein distance on the metric space of hypotheses $(\mathcal{H}, d)$. Denoting the data distribution by $\mathcal{D}$, the authors study the following learning problems: 1. Batch learning. Under the assumption that the loss function has bounded 2nd moment with respect to $\mathcal{D}$ and priors over $\mathcal{H}$, the authors derive a generalization bound in terms of $LW$, where $W$ is the Wasserstein distance between posterior and prior distributions over $\mathcal{H}$ and $L$ is the Lipschitz constant of the loss function (with respect to $d$). 2. Online learning. Under the assumption that the loss function has bounded 2nd _conditional_ moment with respect to $\mathcal{D}$ and priors over $\mathcal{H}$, the authors derive a similar generalization bound as above, where the population risk is defined by the sum of conditional expections of the loss, given a filtration adapted to the sequence of data. Both bounds can be applied to heavy-tailed loss functions with bounded 2nd moment. And they can be used to derive new learning problems regularized by $LW$. Experimental results with a linear model and an NN model show that these algorithms have better test risks than ERM and OGD on several dataset, among which online learning with the NN model shows the most improvement. Strengths: - This is a strong result in the field of PAC-Bayes learning. Replacing the KL divergence by the Wasserstein distance is logical when the geometry of the underlying metric space is in consideration. - The bound is just as nice as the KL divergence version of the PAC-Bayes bound. And the ability to convert the bound into the algorithm is powerful. - The experimental results show that the bound can be used to improve models' generalization. Weaknesses: - It seems that the batch size substantially affects the test risks. For example, learning with $\epsilon = 1/m$ on MNIST and TICTACTOE with the linear model, and TICTACTOE on the NN model. Are there any specific reasons that make these models underperform on these datasets? - Alg 2. seemingly performs as well as OGD with the linear model; on the other hand, the former does noticeably better with the NN model. The authors might want to discuss on why the NN model performs well in this case. - A minor comment on the organization: In Section 4.1: Batch algorithm, the authors set $L=1/2$ and proceed to define a specific loss that has the said Lipschitz constant, which is quite confusing to read. I think it is more logical to introduce the loss function first. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Though not required, but I think high level ideas of the proofs of Theorem 1 and 3 would be a nice addition. - In Section 4.1: Batch algorithm, has the definition of $d(h,h_i)$ been defined before? I guess the authors have make a convention that $d(h,h_i)=d(w,w_i)$. - Can the authors performs the experiment against ERM with standard regularization as well? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors might want to add some limitation of the bounds. For example, it might be difficult to compute the Wasserstein distance over nonparametric models, such as kNN. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your enthusiastic review. We are thrilled to read that you see this work as a 'strong result in the field of PAC-Bayes learning'. We answer your questions below. **About the batch size** This is indeed a fair question, and unfortunately at this stage we found no particular reason why $\varepsilon$ affects the test risks. For a given dataset, there is no obvious choice for the model and the parameters $\varepsilon$ and $K$. For instance, in Table 3 of Appendix C, for the SEGMENTATION dataset, the parameters $K=1,\varepsilon=\frac{1}{m}$ are optimal (in terms of test risks) for both models. As $K=1$ means that our single prior is data-free, this shows that the intrinsic structure of SEGMENTATION makes it less sensitive to both the information contained in the prior ($K=1$ meaning data-free prior) and the place of the prior itself ($\varepsilon=1/m$ meaning that we give less weight to the regularisation within our optimisation procedure). On the contrary, in Table 1, the YEAST dataset performs significantly better when $\varepsilon=1/\sqrt{m}$ (and $K=0.2\sqrt{m}$), exhibiting a positive impact of our data-dependent priors. We will add a discussion on this matter in the next version. **Performance of Alg.2** As stated in lines 353 to 355, we believe that OGD can suffer from underfitting. Indeed, only one gradient descent step is done for each new datum $(\mathbf{x}\_i, y\_i)$, which might not be sufficient to decrease the loss. Instead, our method solves the problem associated with Eq. (7) and constrains the descent with the norm $||\mathbf{w}-\mathbf{w}\_{i-1}||$. In the linear case, we are able to solve the problem, but for the neural network case, we minimise the approximation for 10 steps. As the question was also raised by reviewer mMbV, this answer will be added in the next revision of the paper after line 355. **About the organisation of Sec. 4.1** Thank you for notifying this; we acknowledge in our general answer that the beginning of Sec. 4.1 would benefit of more clarity. More precisely, we will move the second paragraph of the 'Batch algorithm' part before Eq. (5). We will also define properly $d$ on $\mathcal{H}$ to make explicit the convention $d(h\_{\mathbf{w}},h\_{\mathbf{w}'})=||\mathbf{w}-\mathbf{w}'||$ and will replace the Euclidean norm by the distance $d$ in Eqs. (5), (6), (7). **High-level ideas of the proof** Due to space constraints, we did not extend the theoretical discussion as much as we wanted. However, we are happy to put more discussion about both the high-level ideas of the proof and our set of assumptions in the next version. **Additional experiments** Yes, we will provide additional experiments. As written in the general answer, we implemented in Table 3 of Appendix C.4 the particular case $K=1$ of Algorithm 1, which corresponds to the classical 'distance to initialisation' regularisation (i.e., $||\mathbf{w}-\mathbf{w}\_0||$ with $\mathbf{w}\_0$ being the initialisation. It shows that in most cases, taking $K$ greater than 1 (i.e., fully exploiting the theoretical ground offered by our PAC-Bayes bounds) leads to better results in a vast majority of datasets. Furthermore, as it was asked by Rev. jPqk to run an ERM with the 'weight decay' regularisation (i.e, $||\mathbf{w}||$), this has been added in the pdf document linked to the general answer. This experiment shows concretely that on a few datasets (namely sensorless and yeast) when our predictors are neural nets, the weight decay regularisation fails to learn while ours succeeds as shown in Tables 1 (of our work) and 4 (of the new pdf). More generally, when considering neural nets, the weight decay regularisation outperforms our method 3 times over 11 for batch learning. Again, we thank you for your review and hope our answers address all of your concerns. --- Rebuttal Comment 1.1: Title: Thank you Comment: I thank the authors for addressing all of my concerns. There is an open problem of finding a suitable batch size left which is interesting in its own right. --- Reply to Comment 1.1.1: Comment: Thank you for your reply and again for your (positive) review. Indeed, the choice of the batch size has been made empirically in our paper. This suggests an important open question which we are happy to mention in the main document and to investigate in future works. Again, we thank you for your time.
Summary: The paper builds on recent advances in PAC-Bayesian learning and derives new Wasserstein distance-based generalization bounds. Besides batch learning for iid data, a first set of results are derived for online learning (with non iid data). The authors also provide tight bounds for the case of heavy tailed loss with bounded second order moments. Through experimental results the authors argue that these bounds can inspire new learning algorithms that achieve comparable (and in some cases better) empirical performance than classical benchmarks (ERM for batch and OGD for online setting). Strengths: The introduction and main results are well placed in the context of recent literature in this area. Although this reviewer has not worked on the topic of PAC-Bayes learning, it was possible to understand the authors’ contribution relative to work by Haddouche and Guedj, Catoni and co-authors, and recent paper of Amit et al from NeurIPS 2022. One would have expected more clarity on the use of “supermatingale toolbox” in the main text of the paper, but I suppose this can be addressed using minor revisions of the paper. Weaknesses: However, there are several points that impede the broader understanding of the work. Firstly, the role and possible limitations imposed by L-Lipschitz assumption should be clarified. In the discussion of Theorem 1, the authors mention that the variance terms are considered with respect to the prior distributions $\pi_{i,\mathcal{S}}$ and not $\rho$. Can you clearly explain why this is a strength of the result (if so)? In comment after equation (2), I agree that Wasserstein distance as regularizer…allows the user of multiple priors. But the issue of selecting K (as opposed to choice of $\lambda$ in classical algorithm). The authors do provide some discussion of role of $K$ but the nuance of its role in regularization is still not very clear (besides, the discussion on tradeoff needs a bit more clarity). Appreciate the interest in online setting, but the contribution/merits of algorithm based on equation (4) are not very clear. Firstly, what is the conceptual novelty here in comparison to online counterpart to equation (3)? Secondly, the particularization to Dirac masses seems like a restriction. The algorithmic aspects need a bit more clarity as well. The computational results seem promising and an assertive discussion on comparison with ERM and OGD benchmarks is needed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see my comments/suggestion in the Weaknesses section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: Would have liked to see a more robust discussion on the limitations of the proposed approach. A clear explanation about when and to what extent geometry of data space can improve the results obtained through Wasserstein distance based approach would be much appreciated. Also comment on future work to refine results on online setting. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your thoughtful review. We are encouraged to read that 'the introduction and main results are well-placed in the context of recent literature' and that you appreciated the goal of our work, even though 'this reviewer has not worked on the topic of PAC-Bayes learning', as one of our goals is to reach a broader audience. **Role of Lipschitz assumption** As the Wasserstein distance is used in PAC-Bayes as a geometric notion of complexity, the theoretical role of the Lipschitz assumption is to allow the use of such a distance. On the contrary, the KL divergence comes from information theory and does not require such an assumption, but the price to pay is that the KL divergence has erratic behaviours in many cases, starting with Dirac measures. Indeed, a KL divergence between two Dirac measures with disjoint supports is infinite. Thus, Lipschitzness helps to handle a broader range of prior/posterior pairs. However, we agree that the Lipschitzness requirement is a clear limitation (as it is to a wide variety of generalization bounds with similar assumptions) and we will make this more clear in the next version. **Discussion on Theorem 1** Previous PAC-Bayes bounds using supermartingales techniques implied that order 2 moments of the prior *and all considered posteriors* have to be uniformly bounded by a certain $C$. This suggests that those two distributions have to be robust enough to compensate for heavy-tailed data. For instance, if the data distribution of a point $z$ only admits a finite variance and that we consider the loss $\ell(h,z)= |h-z|$ where both $h,z$ lie in the real axis. Then, assuming $\mathbb{E}[z^2]\le V$, then by Cauchy Schwarz, we have $\mathbb{E}[\ell(h,z)^2] \le \mathbb{E}[h^2] + 2V\mathbb{E}[|h|] +V^2 \le C$, the last inequality being our assumption. This invites us to consider posterior distributions with first and second moments satisfying the previous inequality, which is restrictive. On the contrary, our work assumes only such an assumption on the prior distribution: we are allowed to move freely on the space of distribution to find the best posterior. We thank you for this discussion which will enhance our manuscript. We will provide more detail on the high-level ideas of the proof in the next version to explain the use of the 'supermartingale toolbox'. **Role of $K$** We thought the role of $K$ with respect to the notion of generalisation, i.e., having multiple data-dependent priors may tighten the generalisation bound and thus, ensure good theoretical generalisation ability. In this work, we consider regularisation as directly incorporating the generalisation error into the algorithm. Understanding more deeply the role of $K$ into the optimisation procedure is a promising lead for future works. **Online setting** The online counterpart of the classical bound Eq. (3) would involve a KL divergence at each time step while our approach Eq. (4) implies a Wasserstein distance. The important point is that we perpetrate the spirit of Haddouche and Guedj (2022), which showed that the natural online counterpart of the classical PAC-Bayes bound Eq. (3) comes with sound theoretical guarantees. Our approach is similar here: in Theorems 1 and 2, we show that our batch PAC-Bayesian algorithm is derived from sound theoretical bounds, and we show in Theorems 3 and 4 that it is the same for its online counterpart. Concerning the Dirac case, it is indeed a restriction for classical bounds Eq. (3). However, we focused on this important case as it cannot be handled properly by classical PAC-Bayes algorithms as the KL divergence of two Dirac with disjoint supports is infinite. In other words, our framework **does not** require Dirac priors (many other options can be used), but it can actually accommodate Dirac priors as opposed to KL-based bounds. An important message of our work is that the PAC-Bayes learning is rich enough to explain the generalisation ability of deterministic predictors (and not only probabilistic ones). **About the experiments** As suggested, we will include more discussion about the experiments in the next version. We are happy to provide supplementary insights into our experimental conclusions. We also provided new experiments with classical regularisers (see general answer). We thank you again for your review and are happy to include the above comment in the revised version of our work, and hopefully, the camera-ready. --- Rebuttal Comment 1.1: Comment: Thank you for your thoughtful response to my comments. --- Reply to Comment 1.1.1: Comment: We would like to thank you again for replying quickly to our rebuttal and for your review.
Rebuttal 1: Rebuttal: We warmly thank all reviewers for their insightful reviews of our work. We are encouraged to see that our work generated enthusiasm and we answer thoroughly to all of the concerns raised by the reviewers. We address, in this general response, remarks raised by at least two reviewers. **New experiments to study ERM with classical regularisation methods.** Additional experiments have been asked by reviewers jPqk and tL5n to see the performance of classical regularisation methods such as the 'distance to initialisation' $||\mathbf{w}-\mathbf{w}\_0||$ and the weight decay $||\mathbf{w}||$. We point out that the first is a particular case of Algorithm 1 when $K=1$ (i.e., we treat the data as a single batch, and the prior is the data-free initialisation); the results are in Table 3 of Appendix C.4. More precisely, this table shows that, on most of the datasets, considering data-dependent priors leads to sharper results. This shows the efficiency of our method compared to the 'distance to initialisation' regularisation. Furthermore, we ran experiments with the weight decay regularisation which is gathered in the pdf document updated with the rebuttal. This experiment demonstrates that on a few datasets (namely SENSORLESS and YEAST) when our predictors are neural nets, the weight decay regularisation fails to learn while ours succeeds as shown in Tables 1 (of our work) and 4 (of the new pdf). More generally, the weight decay regularisation outperforms our method 3 times over 11 for batch learning with neural nets. **Clarity of Section 4.1** Several comments have pointed out the lack of clarity in the first paragraphs of Section 4.1. We worked on the following points that will appear in the revised version of the paper: 1. We move the second paragraph of the 'Batch algorithm' part before Eq. (5). This makes the loss function we introduced appear more clearly, as suggested by Rev. tL5n. 2. Also, when our predictor space $\mathcal{H}$ is parametrised over $\mathbb{R}^D$ (for a certain $D>0$) equipped with the Euclidean distance $||.||$, we clearly define the distance $d$ between two predictors $h\_{\mathbf{w}},h\_{\mathbf{w}'}\in \mathcal{H}^2$ to be $d(h\_{\mathbf{w}},h\_{\mathbf{w}'})= ||\mathbf{w}-\mathbf{w}'||$. In Eq. (5), we also replace $||\mathbf{w}-\mathbf{w}\_i||$ by $d(h\_{\mathbf{w}},h\_{\mathbf{w}\_i})$ for the sake of consistency. This might clarify the concern of Rev. mMbV on how this Euclidean distance appeared in Eqs. (5),(6),(7) but raises the question of whether our losses are Lipschitz with regard to $\mathbf{w}$ and not $h_{\mathbf{w}}$. As Lemma 8 provided a positive answer for the linear model, we provided a supplementary lemma (with its proof) stating that our neural networks, as long as the weights are clipped (with an arbitrarily high constant), are Lipschitz with respect to their weight, explaining why our learning procedure is consistent with our theorems. As we considered a high value of the clipping, it did not interfere with our optimisation procedure. Those points will be clearly stated in the next version. Again, we thank the reviewers for their work and hope this, as well as individual responses, clarifies all existing concerns. Pdf: /pdf/fa771c8b465ff004a5b4184cebcd2fabcf1134a2.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Why Does Sharpness-Aware Minimization Generalize Better Than SGD?
Accept (poster)
Summary: This paper presents an in-depth theoretical examination of Sharpness-Aware Minimization (SAM) in the context of feature learning. The authors identify the challenge of overfitting in the training of these networks, a problem that becomes more prominent as the model size increases. The traditional gradient-based methods like Gradient Descent (GD) and Stochastic Gradient Descent (SGD) are identified to suffer from unstable training and harmful overfitting. The authors highlight SAM as a promising alternative that has demonstrated improved generalization even in situations where label noise is present. However, they point out the current lack of understanding on why SAM outperforms SGD, especially in nonlinear neural networks and classification tasks. The core contribution of this paper is filling this knowledge gap by providing theoretical reasoning on why SAM has better generalization than SGD, particularly in the context of two-layer convolutional ReLU networks. The authors characterize the conditions under which benign overfitting can occur when using SGD, and further demonstrate a phase transition phenomenon unique to SGD. They formally prove that under these same conditions, SAM can achieve benign overfitting, and therefore has a distinct advantage in terms of generalization error. Specifically, the paper highlights the ability of SAM to mitigate noise learning, which prevents it from succumbing to harmful overfitting early on in the training process. This aspect of SAM allows it to facilitate more effective learning of weak features. The theoretical findings are supported by experiments on both synthetic and real data, bolstering the credibility of the presented theory. Strengths: - **Addressing a Significant Issue**: The authors tackle the important issue of overfitting in the training of large neural networks. They delve into a key challenge facing the field of deep learning, thus making their work relevant and timely. - **Theoretical Contributions**: This paper provides a strong theoretical analysis of why Sharpness-Aware Minimization (SAM) outperforms Stochastic Gradient Descent (SGD), particularly for two-layer convolutional ReLU networks. This contributes to the current understanding of how to improve the generalization of large neural networks. - **Comprehensive Study**: The authors carry out an in-depth comparison of SAM and SGD, using both synthetic and real data. This robust approach adds to the validity and comprehensiveness of their findings. - **Novelty**: The authors claim that this is the first benign overfitting result for a neural network trained with mini-batch SGD. This claim, if validated, could signify a novel contribution to the field. - **Clarity and Organization**: The paper appears to be well-structured and clearly written, with a good overview of the background literature, a clear statement of the problem, and a detailed account of the authors' contributions. Weaknesses: - **Limited Scope**: The paper's focus on two-layer convolutional ReLU networks with fixed second layer might limit the generalizability of its findings to other types of networks. Further research could be needed to determine if SAM has similar benefits for different architectures. - **Presentation Could Be Enhanced**: The current version of the paper could benefit from further polishing in terms of presentation. For instance, the y-axis in Figure 2 lacks clarity — it's not immediately evident whether it represents a normal scale or a logarithmic scale. Also, the 'dimension' it not clear. The term 'clear samples' could be more accurately represented as 'clean samples.' Additionally, there's inconsistency in the usage of 'P-1' and 'P', which should ideally be standardized throughout the paper for improved readability and comprehension. - **Partial Theoretical Results**: The authors provide a thorough analysis of both benign and harmful overfitting scenarios for SGD. However, the corresponding harmful overfitting regime for SAM appears to be absent from the theoretical results, which could make the comparison somewhat incomplete. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could you please share your thoughts on the potential behavior of SAM within the gray region depicted in Figure 1? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: I believe the authors could provide a more extensive evaluation of potential limitations inherent in their research. Engaging in a comprehensive discussion regarding these constraints would strengthen the overall rigor and transparency of the study. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your strong support. Below, we provide answers to your comments and questions, and we will ensure that the revisions are made in the final draft. **Q1**. Further research could be needed to determine if SAM has similar benefits for different architectures. **A1**. Thank you for your constructive feedback. Since this is the first paper towards formally understanding why SAM outperforms SGD for ReLU networks, we choose to avoid introducing unnecessary complexities in the data/CNN models and the analysis to make the result easy to follow. This ensures clarity and ease of understanding. Exploring more complex data/CNN models certainly presents an exciting avenue for future research, and we plan to investigate it in subsequent studies. --- **Q2**. The y-axis in Figure 2 lacks clarity. There's inconsistency in the usage of 'P-1' and 'P'. **A2**. Thank you for your valuable feedback, the y-axis represents a normal scale with a range of 1000-21000. We will address the issues mentioned above in our final version. As for the usage of ‘P-1’ and ‘P’, the data has P patches, where P-1 patches among them are noise. Therefore, ‘P-1’ and ‘P’ both exist in our paper. However, we understand this may cause confusion and will find a way for clear expression in our final version. --- **Q3**. The corresponding harmful overfitting regime for SAM appears to be absent from the theoretical results, which could make the comparison somewhat incomplete. **A3**. The main focus of our paper is not to provide a complete characterization of benign/harmful regions of SAM but to show that SAM has a larger benign overfitting region than SGD (Figure 1). This suffices to explain why SAM can generalize better than SGD. We leave the investigation of harmful overfitting regime for SAM as a future work. --- **Q4**. Could you please share your thoughts on the potential behavior of SAM within the gray region depicted in Figure 1? **A4**. We conjecture that SAM generalizes badly on the gray region due to our empirical results in Figure 2 where the blue color represents high test error. It is an interesting future direction to prove the phase transition between harmful overfitting and benign overfitting for the SAM algorithm. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal to my comments and questions. I appreciate the time and effort you have put into addressing each of my concerns. As a final step in my review, I would like to request access to the code of the experiments you used during the rebuttal. This will allow me to further verify the results and ensure the integrity of the conclusions drawn. Once again, thank you for your detailed responses. I am satisfied with the reponses you have proposed, and I believe they will enhance the quality and clarity of the paper. --- Reply to Comment 1.1.1: Comment: Thank you very much for your positive feedback! In compliance with the rebuttal and discussion guideline, we have made the code available at an anonymous link and sent it to the Area Chair. We anticipate that you will get access to the code very soon. Once again, we really appreciate your valuable comments and suggestions, which help us improve our work!
Summary: This paper studies the question of why SAM generalizes better than SGD on a specific binary classification task. The task looks like a special case of the sparse coding model where the relevant parameter is a signal-to-noise ratio (SNR). The authors present theoretical results on the performance of SAM and SGD for this specific task. More specifically, their main results say that SGD requires higher SNR in the data distribution to generalize, while SAM can generalize even with smaller SNR. Strengths: - The paper has a very comprehensive end-to-end analysis for neural network learning, which is usually very technically complicated. - The paper is overall well-written and it was easy to follow. - Figure 1 and Figure 2 have very nice resemblance. Weaknesses: - Discussion of the results seems not quite comprehensive; see the comments below. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - The binary classification considered in this work seems to be an instantiation of the well known sparse coding problem. It would be helpful to motivate the model by presenting connection to the well known sparse coding problem. - Connection to large learning rates GD/SGD. Recently there have been several works trying to understand why large learning rate GD/SGD generalize better than small learning rate ones. It would be clarifying to see if that's also the case for the model considered in this work. It seems that there are recent works that show that large learning rates help with generalization for a sparse coding problem (e.g. [1]). Given this, it would be a nice clarification to test at least empirically if one can achieve SAM's performance with larger learning rate GD/SGD. - Related to the comment about the large learning rate, Q. Is the result for SGD also true for Gradient Flow (continuous dynamics of SGD)? - The theoretical results show that SAM updates help prevent memorizing the spurious feature (or harmful overfitting) during the early stage of the training. For the model in this paper, does that mean you can switch from SAM to SGD after a few epochs and basically achieve the same performance as SAM? - The required perturbation radius ($\tau$) in theory scales inversely with $\sqrt{d}$, which seems much smaller than what people choose in practice. Does this requirement consistent with the experiments in the paper? I think it's an important point since a sufficiently large $\tau$ distinguishes SAM from SGD. - The original motivation for SAM was to encourage the landscape around the solution to be more flat. Does that intuition also hold true for the sparse coding problem considered in this work? Does the prevention of harmful memorization lead to a flatter landscape when you consider the flatness around the iterates of SAM? - Does the oscillation of SAM (dynamics characterized in the previous works mentioned in this paper) also occur in the training dynamics of SAM for the model in this paper? If so, does the oscillation another explanation as to why SAM does not learn spurious features? - In the experiments, it seems that only full-batch GD/SAM are tested for the synthetic model setting. Could you comment on the behavior for SGD and Stochastic SAM? - Also, many recent works have studied un-normalized variant of SAM (usually referred to as USAM), where the ascent step normalization is removed. Are the main results for SAM also true for USAM or is the normalization necessary for the main results? - In the related work section, it would be nice to see discussion on other techniques (large learning rates, label noise etc) as there have been many works on their effects on generalization (e.g. [1] [2] [3]). - [1] Learning threshold neurons via the "edge of stability" (https://arxiv.org/abs/2212.07469) - [2] SGD with Large Step Sizes Learns Sparse Features (https://arxiv.org/abs/2210.05337) - [3] What Happens after SGD Reaches Zero Loss? --A Mathematical Framework (https://arxiv.org/abs/2110.06914) - [4] Label Noise SGD Provably Prefers Flat Global Minimizers (https://arxiv.org/abs/2106.06530) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Please see the comments in the Question section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback! Due to space limits, we answer your major comments and questions as follows. **Q1**. It would be helpful to motivate the model by presenting a connection to the well known sparse coding problem. **A1**. Thank you for bringing our attention to the sparse coding problem. This is definitely a good way to motivate our study. We will emphasize the connection of our data model to the sparse coding problem in the revision and add the related work. --- **Q2**. Connection to large learning rates GD/SGD … It would be a nice clarification to test at least empirically if one can achieve SAM's performance with larger learning rate GD/SGD. **A2**. Thank you for pointing out the relevant studies and suggesting to test larger learning rates for GD/SGD. We will cite these works and study the effects of learning rate variations in our revision. Firstly, we apologize for the oversight in Section 5, where we mistakenly mentioned a learning rate of 0.1, which should be 0.01. This will be corrected in the revision. Additionally, we've conducted a detailed ablation study on the learning rate in the uploaded PDF. Specifically, we tested the implications of larger learning rates 0.1 and 1 with the same condition in Section 5. Figure 2 in the uploaded pdf indicates that slightly larger learning rates indeed boosts the generalization performance of SGD. The benign overfitting region is enlarged for learning rates of 0.1 and 1 when contrasted with 0.01. This trend resonates with the findings of the studies you've recommended. Importantly, even with this expansion, the benign overfitting region remains smaller than what is empirically observed with SAM. Our conclusion is that while SGD with a larger learning rate exhibits improved generalization, it still falls short of matching SAM's performance. We'll integrate these observations and provide a more comprehensive discussion on learning rate in the revision. --- **Q3**. Is the result for SGD also true for Gradient Flow (continuous dynamics of SGD)? **A3**. We require only an upper bound for the learning rate $\eta$. Therefore, our theory is applicable for any sufficiently small learning rate and should potentially be extended to gradient flow. --- **Q4**. Does that mean you can switch from SAM to SGD after a few epochs and basically achieve the same performance as SAM? **A4**. Yes, under the model studied in this paper, transitioning from SAM to SGD after certain epochs yields performance comparable to that of SAM. This observation aligns with practical findings, such as Figure 9 in Andriushchenko and Flammarion (2022) . --- **Q5**. The required perturbation radius in theory scales inversely with $\sqrt{d}$, which seems much smaller than what people choose in practice. **A5**. The required perturbation radius in Theorem 4.1 is $\tau = \Theta(m\sqrt{B}/P\sigma_p\sqrt{d})$, which depends on many factors and also hides the constant. Take the synthetic data as an example, we have $(m \sqrt{B} / P \sigma_{p} \sqrt{d} = 10 * \sqrt{20} / (2 * 1 * \sqrt{10000}) \approx 0.22 $ which is not small at all. We choose $\tau = 0.03$ due to the constant omitted in the bound. --- **Q6**. Does the prevention of harmful memorization lead to a flatter landscape when you consider the flatness around the iterates of SAM? **A6**. Previous papers focus on the flatness (sharpness) to explain the success of SAM. Our paper provides a different perspective for why SAM generalizes better than SGD. In our current analysis, we were not able to show that the prevention of harmful memorization can make SAM converge to flat minima. Yet this is a very interesting and important question to study in the future. --- **Q7**. Does the oscillation of SAM also occur in the training dynamics of SAM for the model in this paper? **A7**. Thank you for this insightful question. Our current theory does not imply the oscillation of SAM dynamics. In our experiments, we did not visualize the dynamics of SAM iterates due to the very complex landscape of the ReLU network (2-layer ReLU on synthetic data and ResNet50/WRN-16-8 on CIFAR10). It is also not easy to visualize the neural network weights in a 2D/3D space. We plan to investigate this question in our future work. --- **Q8**. Could you comment on the behavior for SGD and Stochastic SAM? **A8**. The comparison keeps the same for SGD and SAM with smaller batch sizes. We have provided additional experiments for them. Please see Figure 1 in the uploaded pdf. --- **Q9**. Are the main results for SAM also true for USAM or is the normalization necessary for the main results? **A9**. The main results for our results relied on the normalization and we carefully characterize it in our proof. Our results do not directly apply to the unnormalized SAM. --- **Q10**. In the related work section, it would be nice to see discussion on other techniques. **A10**. Thank you for your suggestion. We will discuss these techniques and related works in our revision. --- [1] Ahn et al., "Learning threshold neurons via the" edge of stability"." arXiv preprint, 2022. [2] Andriushchenko et al. "Sgd with large step sizes learns sparse features." ICML, 2023. [3] Li et al., "What Happens after SGD Reaches Zero Loss?--A Mathematical Framework." arXiv preprint, 2021. [4] Damian et al., "Label noise sgd provably prefers flat global minimizers." NeurIPS, 2021. --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: I read the author response, and it's excellent. Assuming that the authors will reflect all of them in the final version, I raise my score to 8. Great work and congrats! Also, regarding Q9 "Are the main results for SAM also true for USAM or is the normalization necessary for the main results?," if the authors have some intelligent things to say regarding that question, please add the results/discussions in the final version. It would be very helpful for the readers in light of the following recent work: Dai et al. 2023 (See the reference below.) In a nutshell, this recent work seems to show that for SAM's practicality, the normalization part is **necessary**. Since the theoretical results in this work show that the normalization step is necessary for the generalization of the resulting solution, this would give a lot of intuitions for practitioners and valuable insights for the ML community! Dai et al. "The Crucial Role of Normalization in Sharpness-Aware Minimization" (https://arxiv.org/abs/2305.15287) --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thank you for raising the score and for your positive feedback! We will make sure to incorporate all the promised changes into our final version. Thank you for pointing out the additional related work on the necessity of normalization in SAM. We will discuss this work as well and articulate our insights in the final version. Indeed, understanding the role of normalization is an important problem that may gain some insights from our theoretical analysis.
Summary: The paper aims to provide a theoretical basis for the superiority of SAM over SGD. Different from former explanation based on Hessian information, the authors firstly discuss the loss landscape of non-smooth neural networks like two-layer convolutional ReLU networks. Notably, the paper proves that under conditions of harmful overfitting using SGD, SAM can deliver benign overfitting and outperform SGD in terms of generalization error. The experiments are conducted to demonstrate that SAM can outperforms SGD in terms of generalization error by mitigating noise learning and enhancing the efficiency of weak feature learning. Strengths: - The motivation is clear and novel. Previous researches focused on shallow model with implicit smooth loss. As a more challenging setting, the paper studies two-layer convolutional networks with ReLU units. - The theoretical analysis is convincing. In terms of benign overfitting, the authors discuss the phenomenon and condition in SGD firstly. In harmful overfitting, the authors theoretically prove that SAM can outperform SGD in the early stage of learning. Weaknesses: - The experiment results may not be reliable. In Section 5, the author sets the learning rate to 0.1. However, according to recent research [1] and my practical experience, I suspect the learning rate is too high and may lead the model to overfit easily. The model may not easily overfit if we use SGD with suitable learning rate. Hence, it might be better to add an experiment and discuss the influence of learning rate. - Potential overclaiming based on the analysis and experiment. At the beginning of the paper, the authors claim that SAM can help model learn weak features effectively. However, I do not see obvious related content that can support the authors claim. I recommend the authors modify their claims or try to discuss the relationship between SAM and weak features more clearly. [1] Andriushchenko et al. “A Modern Look at the Relationship between Sharpness and Generalization”, ICML, 2023. ------- After reading the responses from the authors, my concerns has been well addressed. I will keep my score as weak accept. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback. We provide our responses below and will make the corresponding adjustments in the final version. **Q1**. I suspect the learning rate 0.1 is too high … It might be better to add an experiment and discuss the influence of the learning rate. **A1**. Thank you for drawing attention to the recent work [1]. We'll cite and discuss it in the related work section. Additionally, we appreciate your keen observation regarding the learning rate. We indeed used a learning rate of 0.01, not 0.1, in Section 5. We will fix this typo in the revision. Furthermore, we've conducted an ablation study on the learning rate, which is presented in the uploaded pdf. Specifically, we experimented with learning rates of 0.001, 0.01, 0.1, and 1 under the conditions described in Section 5. Figure 2 in the uploaded PDF shows that for learning rates of 0.01 and 0.001, the patterns of harmful and benign overfitting are quite similar and consistent with our Theorem 3.2. Since SGD with a small learning rate needs a longer time to converge, we didn't finish the experiments for extremely smaller learning rates due to the time limit. But given that Theorem 3.2 holds when the learning rate $\eta$ is sufficiently small, we believe that the phase transition for a smaller learning rate would also align with the results from learning rates of 0.01 and 0.001 if training longer. Interestingly, we observed that larger learning rates like 0.1 and 1 enhance SGD's generalization performance, but are still worse than SAM. The benign overfitting region expands for learning rates of 0.1 and 1 when compared with 0.01 and 0.001. This observation might be tied to the phenomenon highlighted by Reviewer CEVG. We'll incorporate these findings and further discuss the impact of learning rate variations in our revision. [1] Andriushchenko et al. “A Modern Look at the Relationship between Sharpness and Generalization”, ICML, 2023. --- **Q2**. I recommend the authors modify their claims or try to discuss the relationship between SAM and weak features more clearly. **A2**. Thank you for your feedback. We believe there may be some confusion caused by the term "weak feature". In our paper, 'weak features' refer to features when the signal-to-noise ratio is low. We will clarify this point by removing “weak” in our revision. Our claim was based on the fact that, for benign overfitting, SGD requires the norm of features to be at least in the order of $d^{1/4}\sigma_p$, while SAM only necessitates the norm of features to be almost a constant. This weaker requirement of SAM allows it to learn features more effectively compared to SGD. Figure 1 in our manuscript aims to provide an intuitive visualization of this result. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. My concerns have been well addressed, so I will keep my score as weak accept.
Summary: This paper presents two theoretical contributions regarding benign/malign overfitting of two-layer convolutional ReLU neural networks. For an idealized data distribution, it i) gives the conditions (with respect to the dimension of the data and to the signal strength) under which benign/malign overfitting occurs when training the network with mini-batch-SGD; ii) shows that training the network with SAM leads benign overfitting more often than SGD and in particular in many situations where SGD suffers from malign overfitting. These findings are illustrated by a numerical experiment conducted in the idealized situation used for the theoretical analysis. Strengths: The two contributions presented in this paper are significant because Contribution i) improves on the literature, in particular with respect to the closest work (Kou et al., 2023), which proposes exactly the same analyses but for a gradient descent training; Contribution ii) is, up to my knowledge, the first such result for SAM. The latter result provides a comprehension on the well-known fact that SAM helps networks to generalize better than SGD. This is a real breakthrough, which should be welcomed by the community. Weaknesses: I was very enthusiastic by the results presented in this paper but I became disillusioned when reading it. The presentation is disordered and in particular, notations are not all defined, making the results (even informally stated) very difficult to understand. After having a glance at the appendix (which seems to be rigorously written), it seems that the manuscript is an assembly of results extracted from the appendix, unfortunately awkwardly built. As for me, it is a pity because I think that the results are important and of interest for the community, but the presentation is inadequate and not clear enough. Comments: 1) The abstract misses to state Contribution i) (which constitutes a large part of the manuscript and which is clearly reminded in the conclusion). 2) Definition 2.1 is difficult to understand. In particular, the parts “signal contained in each data point” and “the other are given by” are unclear. Besides, even though this model seems common in the recent literature, a discussion regarding its limitations, for instance independence between labels and covariates, would be appreciated. The same remark can be done regarding the architecture of the neural network considered (Section 2.2). In addition, Condition 3.1 is discussed only quickly Line 142 ; a deeper discussion regarding the role of the variances and the polynomial degrees appearing would be appreciated. 3) There is no link between statements/results in the paper and their counterparts in the appendix. Since the reader is supposed to juggle the two, it would be helpful to know where are the formal statements and the proofs in the appendix. 4) I understand that authors try to explain the derivation of their main results but I am not very comfortable with informal statements, first because of the notation problem previously stated and also because some conclusions are given with vague explanations (in my case they are difficult to grasp) while they seem to be quite difficult to obtain formally. This is the case for instance Line 218 regarding the symmetry of $\rho$. 5) Line 117, the authors invoke the discontinuity of the gradient to justify the need of an analysis based on an other technique than the Taylor expansion. As for me an even better argument is that the ReLU function is formally not differentiable everywhere. 6) The statement of Theorem 4.1 is a bit unclear: the parts “we train/we can train” could be replaced by passive forms, “neural networks” refers, as far as I understand, to the chosen architecture. In addition, mentioning SGD in a result concerning SAM may throw the reader. It could be specified beforehand that SAM training uses intrinsically SGD (but at a point which is not the current iterate). 7) Lemma 4.3 is true only for a particular choice of $\tau$, as stated in Theorem 4.1 (I am not sure that this is clear in Lemma C.5). Since this result is important, this should appear in Lemma 4.3 and discussed after. 8) The related work section (Section 6) appears at the end of the manuscript but is an enumeration of papers. Such an enumeration is generally well placed just after the introduction. Placing the related work section after the result statements makes sense if it discusses technical differences with the closest papers. As it happens, this work seems to be based to a certain extent on (Kou et al., 2023). Thus, it would be enriching to discuss the contribution, and particularly the technical novelties (GD → SGD/SAM), of this work with respect to (Kou et al., 2023). 9) Mathematical remarks: Line 47, $t$, $\mathbf W^{(t)}$, $\boldsymbol \mu$, $L_{\mathcal D}^{0-1}$, $p$, $\Omega$ are not defined; Lines 47, 150, 240 and after, the expression “converges to $\epsilon$” should be replaced by “converges to $0$”; Line 66, $l_2$ → $\ell_2$, Line 68, absolute values in $|a_k/b_k|$ seem useless; “omit logarithmic terms” should be defined explicitly; Line 97, $\mathbf W$ is not defined; Line 100, “is a collection of” means “is a matrix”, doesn’t it? Line 101, $[n]$ is not defined; Line 113, it is specified that $\sigma_0^2$ is the variance of the normal distribution but not Line 80; in Equation (5), the 2-norm should be a Frobienus norm; what is the utility of Equation (6) with respect to the equation Line 175? Line 121, $S_i^{(t, b)}$ → $\tilde S_i^{(t, b)}$; Line 219, $T^*$ is not defined; Lines 218 and 226, the authors could remind what are $H$ and $B$; Line 226, I understand that $|\cdot|$ is the cardinality but this is not stated. 10) Typographical remarks: Line 9, “for the certain” → “for a certain”; Line 27, “with minimal gradient” → “with minimal gradient norm”; Figure 1, blue and yellow are inverted; Line 102, “cross-entropy loss function” and “logistic loss” are redundant; Line 115, “indicator function” → “the indicator function”; Line 156, “Bayesian optimal risk” → “Bayes risk”; full points are missing Lines 202, 207, 214; Line 237: “iteration” → “iterations”; Line 251, full point instead of colon; Figure 2, y-label is cut; Line 299, “an generalization” → “a generalization”. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: 1) Equations Line 175 and 176 are given as a definition (Definition 3.3), but appear to be a result of the data distribution and of the network architecture chosen. Why are they presented as a definition? 2) The numerical experiment is performed with gradient descent instead of SGD, while the theoretical analysis deeply relies on SGD. Why? Are results identical with SGD? 3) Similarly, are the theoretical results for SAM the same if optimization is performed with gradient descent? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 1 poor Contribution: 4 excellent Limitations: See above regarding mathematical assumptions. Societal impact is not addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. Due to space limits, we will address your major comments and questions as follows. We will revise the corresponding parts accordingly as well as address the minor points in the final version. **Q1**. The abstract misses to state Contribution i). **A1**. Thank you for your suggestion. We will add the contribution i) in the abstract of our final version. --- **Q2**. Definition 2.1 is difficult to understand. A discussion regarding its limitations, for instance independence between labels and covariates, would be appreciated. **A2**. Thank you for your feedback. We will add more explanation of Definition 2.1 and the architecture of neural networks in our final version for a clear presentation. Regarding the “independence between label and covariates,” we believe there is a misunderstanding. For any covariate $x=[x^{(1)},... x^{(P)}]$, there is exactly one $x^{(j)} = y\cdot\mu$, and the others are random Gaussian vectors. For example, the covariates $x$ could be $[y\cdot\mu, \xi, \ldots, \xi]$, $[\xi,\ldots, y\cdot\mu, \xi]$ or $[\xi, \ldots, \xi, y\cdot\mu]$. The signal patch $y\cdot\mu$ can appear at any position. So $x$ and $y$ are not independent. We will make it clearer in the final version. --- **Q3**. It would be helpful to know where the formal statements and the proofs are in the appendix. **A3**. Thank you for your suggestion. We will add pointers in our final version. In particular, Theorem 1.1 is the informal statement of Theorem 3.2 and Theorem 4.1. Lemma 3.4 is the informal statement of Lemma B.8. Lemma 3.5 is the informal statement of Lemma A.6. Lemma 4.3 is the informal statement of Lemma C.5. --- **Q4**. Not very comfortable with informal statements, for instance Line 218 regarding the symmetry of $\rho$. **A4**. We’re sorry for the confusion. We utilized the term "symmetry" to describe a situation where the summation of $\bar{\rho}$ corresponding to different samples yields similar values. More precisely, the difference between these values can be bounded by a small constant: $\sum_{r=1}^{m}\zeta_{y_i,r,i}^{(t,b_1)}-\sum_{r=1}^{m}\zeta_{y_k,r,k}^{(t,b_2)}\leq\kappa$, as indicated in lines 203-204. We will remove the word 'symmetry' from the paper and revise this part accordingly. --- **Q5**. Lemma 4.3 is true only for a particular choice of $\tau$, as stated in Theorem 4.1. **A5**. We’re sorry for missing the condition in Lemma 4.3. It requires $\tau = \Theta\Big(\frac{m\sqrt{B}}{P\sigma_{p}\sqrt{d}}\Big)$. --- **Q6**. Discuss the technical novelties of this work with respect to (Kou et al., 2023). **A6**. The key technical challenges and novelties of our work compared with Kou et al. 2023 are highlighted as follows: - We studied SGD rather than GD. As the mini-batch update of SGD only utilizes only a small part of samples, different samples contribute to the update of coefficients differently. Consequently, the noisy samples may deviate the learning process. Thus, we have developed techniques to control the update of coefficients at both batch-level and epoch-level. Specifically, Lemma B.7 bounds the batch-level update, and Lemma B.8 controls the epoch-level update and aligns the update of batch-level and epoch-level. - We also provide a novel analysis for SAM, which is very different from GD/SGD. SAM has a completely different neuron activation pattern from SGD. The activation pattern is based on the perturbed weight $\mathbf{w} + \mathbf{\epsilon}$ in SAM, rather than the ​​unperturbed weight $\mathbf{w}$ as in SGD. The perturbation $ \mathbf{\epsilon}$ introduces difficulties in the analysis. We discussed the difficulties and our techniques to tackle them in Section 4.1. More specifically, we decompose SAM updates into SGD step and perturbation step and connect them through Lemma 4.3. We show that if a neuron is activated by noises in the SGD step, it will subsequently become deactivated for the perturbation step. This technique is new and has never been used in prior works such as Kou et al., 2023. It is pivotal for the analysis of SAM. --- **Q7**. Mathematical and typographical remarks. **A7**. - Line 100, The phrase "a collection of" can be interpreted as a new matrix or as a tensor of the weight matrix. - Equation (6) is a variant of Line 175, which we refer to for technical clarity. By further decomposing the coefficient $\rho_{j,r,i}^{(t,b)}$ into $\overline{\rho}_{j,r,i}^{(t,b)}$ and $\underline{\rho}\_{j,r,i}\^{(t,b)}$, we can streamline our proof. - $T^*$ is defined in section B.1.2 as $T^* = \eta^{-1} \text{poly}(\epsilon^{-1}, d, n,m)$. --- **Q8**. Why Equations Line 175 and 176 are given as a definition? **A8**. Thank you for your suggestion. We will present it as a lemma instead of a definition in the revision to ensure clarity. --- **Q9**. The numerical experiment is performed with gradient descent instead of SGD, while the theoretical analysis deeply relies on SGD. Why? Are results identical with SGD? **A9**. Our results also apply to GD, since GD can be viewed as a special case of SGD where the batch size is equal to the dataset size ($B = n$). We have additional experiments of SGD (batch=1024) on the real data set in Appendix D, and it is also consistent with our theoretical analysis. We have also added an experiment of SGD with mini-batch size 10 on the synthetic data in Figure 1 of the uploaded PDF. If we compare Figure 1 of the uploaded PDF and Figure 2 of our main paper, the comparison results remain the same. --- **Q10**. Are the theoretical results for SAM the same if optimization is performed with gradient descent? **A10**. Yes, since SAM with full gradient is a special case of SAM with stochastic gradient when the batch size is equal to the dataset size (B = n), our theoretical results also hold for SAM with full gradient. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their rebuttal, which I read carefully. I acknowledge that authors accept to change the paper presentation according to my comments and I agree to increase my score consequently. Since I do not have access to the revised version, the score is increased of one level. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thank you for raising the score and for your constructive comments! We will be sure to incorporate your suggested changes and revise our paper accordingly in the final version.
Rebuttal 1: Rebuttal: We want to thank all the reviewers for their valuable comments. In the uploaded pdf, we include additional empirical results to address reviewers' concern including Figures 1 and 2. + **Figure1**: To address Reviewer TY4G’s concern that our synthetic experiment is only performed with gradient descent instead of SGD, we have added an experiment of SGD with mini batch size 10 on the synthetic data. If we compare this Figure 1 of the uploaded PDF and Figure 2 of our main paper, the comparison results should remain the same and both support our main Theorems 3.2 and 4.1. + **Figure2**: Reviewers jpeN and CEVG raise an intriguing direction that whether tuning ​​learning rate can get better generalization performance and achieve SAM's performance. Therefore, We've conducted an extended study on the learning rate. Specifically, we experimented with learning rates of 0.001, 0.01, 0.1, and 1 under the same conditions described in Section 5. The results show that for all learning rates, the patterns of harmful and benign overfitting are quite similar and consistent with our Theorem 3.2. We observed that larger learning rates such as 0.1 and 1 can enhance SGD's generalization performance. This observation might be related to the phenomenon pointed out by Reviewer CEVG. Importantly, for all learning rates, the benign overfitting regions remain smaller than what is empirically observed with SAM. Our conclusion is that while SGD with a more considerable learning rate exhibits improved generalization, it still falls short of matching SAM's performance. Pdf: /pdf/a91c80673fd8060c79153d4b605ffe4e77680c46.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Learning bounded-degree polytrees with samples
Reject
Summary: The paper describes an approach for learning bounded in-degree polytrees that a family of Bayesian networks. More precisely, given the skeleton of the polytree $P$ from which the samples are from, their algorithm learns a $d$-polytree whose distribution is likely to be close to $P$ (with respect to KL divergence) using mutual information tests. Importantly, the algorithm runs in polynomial time for a fixed $d$, whereas the exact learning problem is known to be NP-hard for $d > 1$. Strengths: To my knowledge, the theoretical results are novel and show that even though the optimal $d$-polytrees are hard to learn exactly, it can be done approximatively. Weaknesses: My main concern is in the relevance of the article to AI community, i.e., it has nice theoretical results, but their practicality remains unclear to me (see Questions section of this review). I would be happy to increase my score if the authors can offer convincing arguments for this. I also recommend carefully proofreading the paper to improve its presentation. To mention some of the minor issues: - 139: "We denote $\pi(v)$ to denote" - 143: The definition of deg-l v-structure should probably include the lack of edges between $u_i$ and $u_j$? Of course, that holds implicitly for forests. - 143: "We say that -- is said to be" - 153: Meek [1995] -> [Meek, 1995] - 186: has -> have Technical Quality: 3 good Clarity: 2 fair Questions for Authors: What kinds of instances would the described algorithms be practical for? In the description of the Algorithm 3, you iterate over all $O(n^d)$ sets of neighbors of size $d$ and compute the estimated mutual information for them. If additionally the number of samples is of order of magnitude $2^d n / \epsilon$ (line 242), for how large $n$ and $d$ would the described algorithm run in a reasonable time? In Lemma 5, you state that the mutual information tests succeed with probability at least $1 - \delta$. However, for example, line 197 states that "the algorithm does not make mistakes for orientations" and line 201 seems to imply that $\hat{I}$ must always be less than $\epsilon$. Am I misunderstanding something or shouldn't there be a risk of erroneus orientations? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: See Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Motivation and practicality** The structure learning problem for high-dimensional distributions has been widely studied in the machine learning community for the last four decades (e.g., Chapters 16-20 of [KF09]). In particular, learning polytree Bayes nets is of great interest, because polytree models admit efficient exact inference using a classic belief-propagation algorithm [KP83]. The book [PNM08] gives a comprehensive, though a bit outdated, survey of applications of Bayes nets. The focus of our work is to advance our understanding of the fundamental achievability and limits of learning high-dimensional graphical models. While advancing the performance of structure learning algorithms in practice is very important in its own right, this is something that is out of the scope of our current work. We note that such theoretical results are explicitly in scope for NeurIPS (cf. the call for paper, and the “Theory” topic within). **Typos** Thank you for catching the typos (all are valid). We will fix them in our revision and also do a more careful proofreading. **Confusion about Lines 197 and 201** Our analysis operate under the event that Lemma 5 succeeds, which happens with probability at least $1-\delta$. Under this event, the orientations made in our algorithm are guaranteed to be correct due to Lemma 5 point 1 and $C < 1$. In the event that any single subroutine fails, we declare that the whole algorithm failed. However, this happens with very low probability (at most $\delta$) and we can reduce the failure probability by supplying more samples. **References** [KF09] Koller, Friedman. Probabilistic Graphical Models - Principles and Techniques. MIT Press, 2009. [PNM08] Pourret, Na, Marcot. Bayesian networks: a practical guide to applications. John Wiley \& Sons, 2008. [KP83] Kim, Pearl. A computational model for causal and diagnostic reasoning in inference systems. IJCAI 1983. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: I thank the authors for their response. Although I still have some concerns about the computational details and expressive capabilities of d-polytrees in the sense of the number of distributions they can represent, I am satisfied with the answers. I have updated my rating accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for increasing the score! In our revision, we will incorporate the points discussed in the rebuttal.
Summary: This paper considers the number of samples to learn a particular class of distributions: bounded-degree polytrees (Bayesian networks whose skeleton is a forest). Recent work has shown that tree-structured Bayesian networks (1-polytrees) are learnable with finite samples; this work makes progress on the natural generalization to polytrees, showing a positive result when the skeleton is given. The work also provides some conditions under which the skeleton is learnable, and a lower bound for the number of samples required. Strengths: Learning a distribution approximately from finite samples is one of the most fundamental tasks in learning theory. This study of the finite-sample learnability of polytrees is a very natural step for building our understanding of this problem, particularly in the context of the recent work showing learnability for tree-structured models. The main result of Theorem 1 (finite-sample learnability of degree-bounded polytrees given the skeleton) is quite fundamental. The algorithm and proof are generally quite natural, and furthermore they help demonstrate the clean manner in which the mutual information tester machinery of [Bhattacharyya et al., 2021] can be leveraged for such results. While accompanying results in Section 4 (Skeleton assumption) and Section 5 (Lower bounds) are less surprising, their presence adds more completeness to the general picture. The paper is generally well-written. Weaknesses: More motivation for studying polytrees might be appreciated by the general NeurIPS community. Regardless, Bayesian networks are well-motivated and polytrees are a natural continuation of the aforementioned recent work. The assumption of being given the skeleton is perhaps the most unsatisfying aspect of these results. For context, my understanding is that when learning tree-structured models (as is the focus of the main prior work of [Bhattacharyya et al., 2021]), the entire task is determining the skeleton, as any rooting of the tree is equivalent. In this sense, it is somewhat disappointing that the entire task of the main prior work needs to be given to the polytree learning algorithm. It would be nice to know whether this assumption is inherently required or just an artifact of the current algorithm. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Is there more discussion that you could provide regarding the necessity of assuming the skeleton is known? Here are some questions that may help orient the discussion (although I do not necessarily expect these to be reasonable to answer in this scope): * Generally, is there clear intuition whether a similar result to Theorem 1 should hold without being given the skeleton? On one hand, it seems plausible to imagine that if the skeleton is hard to learn, then perhaps the choice of skeleton is not so important. On the other hand, it seems plausible that it is hard to learn the skeleton and there are many approximately correct skeletons, but orienting an only approximately correct skeleton is hard (maybe this has some connection to the hardness in the unrealizable setting). * Is it clear that the Chow-Liu algorithm does not learn an approximately correct skeleton? (Of course, whether having such a skeleton is helpful seems not necessarily obvious.) Minor remarks: * Should it be $\hat{I}$ in line 145? * Line 194 “algorithm 1” should be capitalized/linked. * Line 269 “or” -> “of” * Generally, it seems like there may be minor errors in comments involving $I, \hat{I}$, and $C$. For example, on lines 422-423 it says “$\hat{I}(\dots) \le \varepsilon$. Since $0<C<1$, this implies that $\hat{I}(\dots)\le C \cdot \varepsilon$”. Since $C<1$ it is not clear to my why such an implication would hold. A similar remark is made on lines 436-437. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitations are addressed fairly in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Typos regarding $\hat{I}$ and $C \cdot \varepsilon$** Thank you for pointing this out, we indeed compare $\hat{I}$ with $C \cdot \varepsilon$. We will fix these in our revision. **Other typos and writing suggestions under minor remarks** Thank you. We will fix the typos and incorporate your writing suggestions in our revision. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response, and particularly their personal intuitions regarding key difficulties for the task without assuming knowledge of the skeleton. I still believe the primary weakness is how the paper does not resolve whether such an assumption is truly necessary. I have kept the rating as-is. --- Reply to Comment 1.1.1: Comment: Thank you once again for your review! In our revision, we will incorporate the points discussed in the rebuttal.
Summary: This paper introduces an efficient learning algorithm for bounded degree polytrees and establishes finite-sample guarantees. Explicit sample complexity and polynomial time complexity are provided. An information-theoretic lower bound is provided, which shows that the sample complexity of the algorithm is nearly tight. Strengths: The paper provides a novel algorithm for learning d-polytrees with general d, extending a previous algorithm for d=1. The theoretical analysis shows that the algorithm is nearly tight in terms of sample complexity. The results do not require distributional assumptions such as strong faithfulness. The ideas and results are clearly presented in the paper. Weaknesses: The recovery of the true skeleton relies on Assumption 11. It would be nice if some comments on this assumption could be given (e.g. whether it is expected to be tight) Technical Quality: 3 good Clarity: 3 good Questions for Authors: The recovery of the true skeleton relies on Assumption 11. Is this assumption expected to be tight, and what is the obstacle for skeleton recovery in more general scenarios? In line 233, there seems to be a redundant "then" Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Tightness and violation of Assumption 11** You are right that the skeletal assumption is crucial in our algorithm and analysis. The sufficient condition is a useful proxy check for the applicability of our methods. If one believes that the sufficient conditions hold in a dataset of interest, then one can be assured that the theoretical guarantees follow. We are unaware whether Assumption 11 is tight. Meanwhile, if Assumption 11 is violated, then the natural algorithm to recover skeleton by running Chow-Liu may fail. One example is to consider the ground truth skeleton of $X-Y-Z$ where $\max(I(X, Y), I(Y, Z)) \ll \varepsilon$, and $\varepsilon$ is the accuracy parameter. Due to sampling error (of using only $Poly(1/\varepsilon)$ finite samples), $\hat{I}(X, Z)$ could potentially be larger than $\hat{I}(X, Y) \text{ and } \hat{I}(Y, Z)$; and thus could have $X-Z$ connected as a result of running Chow-Liu. Moreover, there may be other ways to recover the true skeleton under other sets of assumptions; e.g, the results of [BH20] apply for more general Bayes nets under different assumptions. Meanwhile, note that the information theoretic lower bound in Section 5 that we give holds even when the skeleton is known. **Typo on Line 233** Thank you for pointing out the typo. We will fix it in our revision. **References** [BH20] Bank, Honorio. Provable Efficient Skeleton Learning of Encodable Discrete Bayes Nets in Poly-Time and Sample Complexity. ISIT 2020. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response! --- Reply to Comment 1.1.1: Comment: Thank you once again for your review! In our revision, we will incorporate the points discussed in the rebuttal.
Summary: The paper gives an efficient PAC-learning algorithm for learning graphical models called "bounded polytrees". These are distributions where 1) the undirected skeleton of the graph is a forest and 2) the in-degree of every node is bounded by some constant $d$. This extends a recent result [1] for directed trees, which is corresponding to the case $d=1$. In contrast to [1], the paper gives a learning algorithm assuming that the skeleton is given. To achieve that, the estimator of conditional mutual information from [1] is extensively used. This estimator is used in a sequence of clever greedy-like checks in order to orient as many edges as possible. After orienting the remaining edges, it is shown that the resulting distribution must have small KL divergence to the true distribution. A sufficient condition is also given, under which the skeleton can be learned for certain distributions by the Chow-Liu algorithm (so it does not have to be given to the algorithm). Finally, a lower bound on sample complexity is proved, roughly matching the upper bound of the algorithm in the case of binary alphabet. [1] Bhattacharyya, Gayen, Price, Vinodchandran, "Near-optimal learning of tree-structured distributions by Chow-Liu", STOC 2021. Strengths: * The studied problem of efficient learning of graphical models is important and interesting. * The paper considers distributions with a tree skeleton and arbitrary orientation of edges as opposed to just directed trees. This is a natural and long-studied class of distributions. * Even given the estimator from [1], the algorithm and proofs are interesting and not trivial. * Section 3 gives a good outline of the algorithm and its correctness proof and the figures were helpful to me. Weaknesses: * The algorithm requires the skeleton as input, which I think is a significant limitation. It is not clear how useful is the sufficient condition proposed by the authors in order to remove this limitation. * The writing could be clearer. Especially the steps which I assume are more standard/obvious to the authors felt rushed. In my opinion, a few places could be rewritten in order to be clearer and more self-contained. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * The proof of Theorem 1 in lines 219-227 is very fast. You state the part about estimating the conditional distributions in a conclusory way. I thought the reference for this would be Theorem 1.4 in [1], but you cite two other papers instead. And the sample complexity there seems to have factor $|\Sigma|^{d+1}$, not $|\Sigma|^d$. Sorry if I am misunderstanding. The formulas in lines 225-227 are given without any justification and connecting them to the discussion before. * In the proof of Lemma 7, lines 421-422, I cannot see why Phase 1 guarantees the existence of this vertex. Can you explain? (By the way, the proof says $u\in S$ and the statement $u\in S\cup S'$. Which is it?) (Also I don't understand why it says $\hat{I}<\epsilon$ if the algorithm is always checking against $C\epsilon$.) * Section 4 also moves fast. In Assumption 11, do you mean that given $P$ and $G^*$, the assumption holds for those $P$ and $G^*$ (and then $G^*$ will be recovered by Chow-Liu run on $P$)? This is not clear from the writing. Also, is $P$ any distribution or is it coming from a tree? * In section 5, Lemma 14 seems given without proof as a direct consequence of Lemma 13. Shouldn't you also exclude the possibility that there exists a distribution $X\leftarrow Z\rightarrow Y$ which is close both to $P_1$ and $P_2$? Also the argument for Theorem 15 is sketchy. I would appreciate a proof in the appendix. * There seem to be some basic properties that you keep applying without ever mentioning them. For example, if $v$ is a node with in-neighbors $u_1,\ldots,u_k$ then $u_1,\ldots,u_k$ are independent. And then there is a similar fact with out-neighbors and conditional independence. It would be nice to state those facts at least once in the preliminaries. * Similarly, it would have helped me if you reminded me from time to time when you are applying formula (1). * lines 202-204: Am I understanding correctly that the formula you are giving here is the KL divergence between $P=P_{G^*}$ and $P_{\hat{G}}$? If yes, why not write it explicitly? Why are you using the word "essentially" which suggests to me that the formula is not entirely correct? Is there a problem I am not seeing? minor: * I think sometimes you use $d$ as the true maximum in-degree and sometimes as a variable (e.g., Algorithm 3, proof of Lemma 6). I would avoid this. * Similarly, the way you use $\epsilon$ and $\epsilon'$ is confusing. The value of $\epsilon$ in Lemmas 7-10 is what you call $\epsilon'$ in the proof of Theorem 1, correct? If yes, maybe you could give the formula for $\epsilon'$ before you start analyzing the algorithm. * I did not understand much from lines 111-116. In the first sentence, do you mean we can always obtain such a graph for any distribution? I guess you can always take the complete graph, but probably that's not what you mean? * In line 183, can you justify the $O(n^d)$ bound? * line 192, $d$ should be $d^*$? * Line 196-197, do you mean not identified in phase 1, or phases 1 and 2? * Proof of Lemma 21, second and third line after 491, $I(X;Y)$ should be $I(Z;Y)$? * typos line 59 "are", line 94 "denotes", line 186 "has", line 198 "in", double-check the notation in the caption of Figure 3, Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: see above Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Citation in proof of Theorem 1** We agree that a reference for Theorem 1.4 in [BGPV21] should be given here as it is the same proof idea: given a good enough graph, one can apply the parameter learning algorithms referenced in our submission; see line 223. **Lines 225-227** We will add the following elaboration for the inequalities in our paper revision: The first inequality comes from our graph learning algorithm's guarantees while the second inequality comes from performing parameter learning algorithms on the learnt graph referenced on Line 223. The final inequality is implied by these two inequalities. **Proof of Lemma 7, Lines 421-422** You are right, we could just write $u \in S$ instead of $u \in S \cup S'$. We will fix in this in our revision. Thank you for pointing this out. **Typos regarding $\hat{I}$ and $C \cdot \epsilon$** Thank you for pointing this out, we indeed compare $\hat{I}$ with $C \cdot \epsilon$. We will fix these in our revision. **Assumption 11** Assumption 11 is well-defined for any distribution $P$, regardless of whether it is a polytree or what its underlying graph $G^*$ looks like. In the event that $P$ is a polytree, Lemma 12 tells us that Chow-Liu will return the true skeleton $G^*$ whenever Assumption 11 holds for $P$. Note that when we *know* that $P$ is a tree, then Assumption 11 is not even required in the context of PAC learning; see [BGPV21]. **Lemma 14** Lemma 14 is indeed a direct consequence of Lemma 13 via the following contrapositive implication: If one can solve the problem in Lemma 14, then one can use that algorithm to solve the problem in Lemma 13. We will add this clarification in our revision. Regarding your other concern, observe that the orientations $\{X \gets Z \to Y, X \gets Z \gets Y, X \to Z \to Y\}$ are equivalent with respect to Equation (1) and we have analyzed $X \to Z \to Y$ in Lemma 13; in fact, these three orientations belong in the same Markov equivalence class. **Theorem 15** We will add a the following paragraph in the appendix. Consider a distribution $P$ on $n/3$ independent copies of the lower bound construction from Lemma 14, where each copy is indexed by $P_i$ for $i \in \{1, \ldots, n/3\}$. Suppose, for a contradiction, that the algorithm draws $c n/\epsilon$ samples for sufficiently small $c > 0$, and manages to output $Q$ that is $\epsilon$-close to $P$ with probability at least 2/3. From Lemma 14 with error tolerance $\Omega(\epsilon / n)$, we know that each copy is *not* $\Omega(\epsilon / n)$-close with probability at least 1/5. By Chernoff bound, at least $\Omega(n)$ copies are *not* $\Omega(\epsilon / n)$-close with probability at least 2/3. Then, by the tensorization of KL divergence, we see that $d_{\text{KL}} \left( \prod_{i=1}^{n/3} P_i || \prod_{i=1}^{n/3} Q_i \right) = \sum_{i=1}^{n/3} d_{\text{KL}}(P_i || Q_i) > \Omega(\epsilon)$. This is a contradiction to the assumption that the algorithm produces $Q$ that is $\epsilon$-close to $P$ with probability at least 2/3. **Basic properties about polytrees and referencing** Thank you for your suggestion. We will include these common properties for polytrees into the preliminaries and try to reference them whenever we invoke these properties, including formula (1). **Lines 202-204** Your understanding is correct. We will write this more explicitly in the revision and not use the term ``essentially''. **$d$ as maximum in-degree and variable** Thank you for your suggestion. We will make the notation less confusing in our notation. To be precise, we will keep $d^*$ as the maximum in-degree and replace the notation for degree variable $d$ by $\gamma$. **$\epsilon$ and $\epsilon'$** Yes, your understanding is correct. Thank you for your suggestion, we will shift the definition of $\epsilon'=\frac{\epsilon}{2 n \cdot (d^* + 1)}$ upwards from its current position on Line 220 to appear around the time we state Lemma 5, with appropriate signposting. **Discussion on Lines 111-116** You are right that one can always pick $G$ to be a clique in order to satisfy the $\epsilon$-close requirement. However, we are interested in obtaining a graph $G$ that is both $\epsilon$-close and facilitates efficient learning algorithms. For instance, if $P$ was defined on a Bayes net with max in-degree $d$, then we want $G$ to also be a Bayes net with max in-degree $d$. This is always possible in $\exp{(n)}$ time by formulating the search of $G$ as an optimization problem that maximizes the summation of mutual information term (first term in (1)); see [H93] and why it is NP-hard in general. As each mutual information terms can be well-estimated, an $\epsilon$-close graph could be obtained by optimizing over the empirical mutual information scores. We will add a version of this discussion in our revision. **The bound on Line 183** Thank you for catching the mistake, it should be $\mathcal{O}(n^{d^*+1})$: Since $\binom{n}{k} = \frac{n!}{k! (n-k)!} \leq \frac{n^k}{k!}$, we see that $n \cdot \sum_{k = 1}^{d^\*} \binom{n}{k} \leq n \cdot \sum_{k = 1}^{d^\*} \frac{n^k}{k!} \leq n \cdot n^{d^\*} \sum_{k = 1}^{d^\*} \frac{1}{k!} \leq n \cdot n^{d^\*} \cdot e$. That is, we get a bound of $\mathcal{O}(n^{d^\*+1})$. **Typo on Line 192** Thank you, we will correct this. **Lines 196-197** Thank you, we indeed mean ``not identified in both phase 1 and phase 2''. We will correct it in the revision. **Proof of Lemma 21** Thank you for pointing that out. The terms $I(X;Y)$ should indeed by $I(Z;Y)$. We will fix this in the revision. **typos line 59 "are", line 94 "denotes", line 186 "has", line 198 "in", double-check the notation in the caption of Figure 3** Thank you very much for pointing out these mistakes. We will fix them in the revision. **References** [BGPV21] Bhattacharyya, Gayen, Price, Vinodchandran. Near-optimal learning of tree-structured distributions by Chow-Liu. STOC 2021. [H93] Höffgen. Learning and robust learning of product distributions. COLT 1993. --- Rebuttal Comment 1.1: Comment: Thank you for your patient replies to my questions. I also appreciate the explanations about the true skeleton in the main rebuttal. **Proof of Lemma 7** This is the main remaining place where I would appreciate some clarifications. 1) You say "Phase 1 guarantees that there exists a vertex $u\in S$ such that $\hat{I}(...)\le\varepsilon$". I think I see the idea but I don't think I understand it completely. For example, as written I think it is possible that $|S\cup S'|>d^*$. In that case you shouldn't have even computed the $\hat{I}$ during Algorithm 3, or am I confused? 2) What is the importance of iterating in the decreasing set size order in the main loop of Algorithm 3? 3) In line 421 you claim $I(u;S\cup S'\setminus\{u\})=0$. I don't understand why this is true. (But you don't need it anyway, since you are upper bounding, right?) 4) In line 423 you say "Corollary 4 tells us $I(u;S\cup S'\setminus\{u\})<\varepsilon$". Is this a typo and what you mean here is the conditional mutual information? **Statement of Assumption 11** 1) When you say "path ... of length greater than 2", I think you mean that the length is 2 or more, correct? It would be good to clarify this one way or the other. 2) Is there some significance of $3$ in $3\cdot\varepsilon_P$? Can it be replaced with $\varepsilon_P$? --- Reply to Comment 1.1.1: Comment: **Proof of Lemma 7** 1. We apologize for the confusion. One point of confusion could be the dual use of the notation "$S$" in both the algorithm and the lemma. For clarity, we will replace $S$ with $T$ in the algorithm in this response. In Lemma 7, $S$ and $S'$ are defined as $S \subseteq \pi^{un}(v)$ and $S' \subseteq \pi^{in}(v)$, so we are always guaranteed that $|S \cup S'| = |\pi(v)| \leq d^*$. Indeed, our previous rebuttal on the proof of Lemma 7 was not quite accurate, and we would like to further clarify this part. It should be $u \in S \cup S'$ and Phase 1 guarantees that $u \in S \cup S'$ (rather than just $S$). To see why, we need to look at line 5 of Algorithm 3 where we check all subsets $T$ of $\pi(v)$ (as well as some other sets) to see if *every* $u \in T$ satisfies $\hat{I}(u; T \backslash \{u\} | v) \geq C \cdot \varepsilon$. From here, we can see that if a subset $T$ of $\pi(v)$ is not *all* oriented into $v$, then we know that from Algo 3, line 5 that there exists some $u \in T$ such that $\hat{I}(u; T \backslash \{u\} | v) < C \cdot \varepsilon$. Applying this to $T = S \cup S'$, where $S$, the set of unoriented neighboring nodes, is not empty, we have our claim. We will expand on this argument in the proof of Lemma 7 for the revision. 2. Regarding the question about considering subsets in decreasing size: the actual order does not matter, as long as the algorithm considers all possible subsets of neighbors. Here we chose, for convenience and to keep track more easily, to consider them in decreasing size. 3. Line 421: As $u$ is a parent of $v$, it is independent of all other parents $\pi(v)$ of $v$. Since $S \cup S' \setminus \{u\} \subseteq \pi(v)$, we also have that $u$ is independent with those variables, i.e. $I(u; S \cup S' \setminus \{u\}) = 0$. 4. Line 423: Yes, it should be $I(u; S \cup S' \setminus \{u\} \mid v) \leq \varepsilon$. Thanks for noticing this! **Assumption 11** Indeed, we mean to consider u-v paths involving at least one additional vertex, say u-w-v. The "3" is an artifact of the proof and you are right that we could have absorbed that constant into the $\varepsilon_P$ term in the statement of Assumption 11. We will update this in our revision to make it cleaner.
Rebuttal 1: Rebuttal: We thank the reviewer for their time and for providing valuable feedback on our paper. In this global response, we address one of the issues raised by multiple reviewers about the known skeleton assumption, sufficiency of the Assumption 11, and the intuition behind the skeleton-based approach. --- The skeletal assumption is crucial in our algorithm and analysis. Assumption 11 is a useful proxy check for the applicability of our methods. If one believes that the sufficient conditions hold in a dataset of interest, then one can be assured that our theoretical guarantees follow. We are unaware whether Assumption 11 is tight. There may be other ways to recover the true skeleton under other sets of assumptions. Knowing the true skeleton is crucial in our analysis as we need a way to compare the output of our algorithm against the ground truth. This is because the KL divergence of two distributions on a polytree is related to the parent sets; see Equation (1). We found it difficult to design efficient algorithms with provable guarantees without access to the true skeleton. In Section 5, we give information-theoretic lower bounds under the assumption that the true skeleton is known, showing that the problem is non-trivial even under the known skeleton assumption. It is natural to ask whether what we can do with access to a false skeleton that is approximately correct (i.e. has some orientation close in KL to the ground truth) produced by running the Chow-Liu algorithm on the sample statistics. However, it is unclear to us why we can hope to design efficient algorithms with provable guarantees for two reasons: - The Chow-Liu algorithm only uses order-1 mutual information while the KL divergence of Equation (1) requires information from order-$d$ mutual information. It is unclear why one can hope that this false skeleton would yield provable guarantees with respect to Equation (1). - An ``approximately correct'' skeleton may have potentially unknown number of edges in the skeleton being wrong and we do not see how to design efficient global orientation algorithms using only statistics from the ground truth samples. Without the true skeleton, a "local algorithm" (such as ours) can be tricked into some "local optima" and it is hard to argue why the output would obtain "global guarantees" with respect to the parent sets of Equation (1).
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
DP-HyPO: An Adaptive Private Framework for Hyperparameter Optimization
Accept (poster)
Summary: This paper presents DP-HyPO, a pioneering framework for adaptive private hyperparameter optimization. Privacy risks are often neglected in private ML model training. DP-HyPO employs a comprehensive differential privacy analysis to bridge the gap between private and non-private optimization methods (non-adaptive HPO method and adaptive HPO method). The framework's effectiveness is demonstrated through empirical evaluations on a diverse range of real-world and synthetic datasets. DP-HyPO offers a promising solution for enhancing model performance while preserving privacy. Strengths: 1. The paper is written to a high standard and is easy to understand. 2. The availability of provided code facilitates reproducibility, making it a straightforward process. 3. Adaptive hyperparameter optimization algorithms have an important place in practice. DP-HyPO permits the flexible use of non-DP adaptive hyperparameter methods, it is a very meaningful research problem. Weaknesses: I only have one concern regarding this paper, which pertains to the utilization of MNIST datasets that are deemed to be overly simplistic and small in scale. The performance improvement of GP compared to the uniform sample is far less than 1 percent as shown in Figure 1. As mentioned in the conclusion, I would also recommend incorporating a more challenging dataset or a search space with greater complexity in future iterations of your work. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Why a more challenging experimental setup was not used at the very beginning of the experiment? I don't think it would be difficult to find a more appropriate scenario to perform the evaluations. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback! We totally agree that the current experiment settings like MNIST are indeed overly simplistic, and think that could be the underlying cause why our experiments have not shown a significant gain. To address this, we conducted new experiments on CIFAR10 (a much more challenging task), and revealed a **substantial disparity** between our proposed approach and the baseline. Please refer to our "general" rebuttal for more detail. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifying comments. It gives me further confidence to recommend that the paper is accepted. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's carefully reading our rebuttal. Thanks again for the valuable feedback!
Summary: This paper introduces DP-HyPO, a framework for “adaptive” private hyperparameter optimization, aiming to bridge the gap between private and non-private hyperparameter optimization, that can allow practitioners to adapt to previous runs and focus on potentially superior hyperparameters. Besides, the arbitrary adaptive sampling distributions based on previous runs are allowed without any stability assumptions. The paper proposes experimental analysis to show the proposed methods' strengths. Strengths: 1. The paper addresses the problem of hyperparam optimization, which is an important problem. 2. The paper proposes an interesting optimization framework, namely, DP-HyPO, which enables adaptive parameter selection under privacy constraints. 3. The paper provides sharp DP guarantees for adaptive private hyperparameter optimization. Weaknesses: 1. The experiments are not abundant, further evaluation should be listed. There are only two sections related to the experiments which I do not think can reveal the strengths of the proposed method. Additional in-deep analysis should be conducted. 2. In the experimental part, some related works are not compared. The author may want to add more related baselines listed in the Section of Related Works to show the efficiency. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. What is the purpose of Figure 1? I want to know the details of experimental settings. Maybe I have lost some important information. 2. What is the performance of other baselines? As the author states, some other baselines can be used for hyperparameter optimization. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The paper studies an important problem. However, the experimental section is unsatisfying and additional experimental analysis should be added. Besides, the author may want to clearly state the experimental settings. Additionally, some non-adaptive methods should be compared such as Grid Search, random search (RS), and Bayesian optimization (BO). Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for valuable feedback. While we respectfully acknowledge the input, we do have different viewpoints on certain aspects raised by the reviewer. **Empirical demonstration is important**: While our intention is to provide a rigorously theoretically-backed privacy accounting framework for HPO with DP consideration, we do believe a strong empirical demonstration is important. We acknowledge that the current experiment settings like MNIST are indeed overly simplistic, and think that could be the underlying cause why our experiments have not shown a significant gain. To address this, we conducted new experiments on CIFAR10 (a much more challenging task), and revealed a **substantial disparity** between our proposed approach and the baseline. Please refer to our "general" rebuttal for more detail. **Comparison to other related methods**: HPO with DP guarantee is an under-explored problem, and there are very limited related works. The simple applications of methods like Grid Search and Random Search will result in the privacy cost growing linearly with the number of hyperparameter trials (proved in Liu & Talwar, 2019). Liu & Talwar (2019) and Papernot & Steinke (2021) proposed the uniform method (our baseline), and this is the only known algorithm whose DP guarantee is independent of the number of runs. Therefore, those methods (Grid search and Random search) are definitely undesirable when the hyperparameter space is non-trivial, which is the main focus of our paper. This is the reason both us and the other two papers omit such comparison. We will make this more clear in the updated version. **The purpose of Figure 1**: Left panel of Figure 1 demonstrates the performance comparison of GP-instantiated DP-HyPO and the Uniform Method on MNIST; Right panel of Figure 1 demonstrates the same comparison on a real federated learning task with synthetically fitted loss landscape. We provided a detailed description in Appendix E of the original submission. Reference: [1] Nicolas Papernot and Thomas Steinke. Hyperparameter tuning with renyi differential privacy. In International Conference on Learning Representations, 2021. [2] Jingcheng Liu and Kunal Talwar. Private selection from private candidates. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, pages 298–309, 2019. [3] Shubhankar Mohapatra, et al. "The role of adaptive optimizers for honest private hyperparameter selection." Proceedings of the aaai conference on artificial intelligence. Vol. 36. No. 7. 2022. --- Rebuttal 2: Title: please respond to authors' rebuttal Comment: Dear reviewer, The authors have responded to your review. Please read it and respond to them. Please note that other reviewers had different evaluations of this work. Have a look at their comments and see if it changes your opinion about this work. Thank you advance, Your AC
Summary: Hyperparameter optimization (HPO) is an important step for enhancing the performance of private model training methods such as DP-SGD. Currently, most advanced HPO algorithms (e.g., Bayesian optimization) require to adaptively select the hyperparameters, but existing private HPO methods are non-adaptive. To fill in this gap, this paper proposed a differentially private adaptive HPO framework (DP-HyPO) which enables the conversion of any non-private HPO algorithm into a private one. The proposed DP-HyPO is shown to achieve a theoretical DP guarantee and outperform the non-adaptive DP-HPO baselines. Strengths: This work has tackled an important problem of private HPO. It contributes some new ideas for converting the non-DP adaptive HPO algorithm into a DP one without incurring substantial privacy cost. The proposed Framework 1 is simple but has good generalization ability. Weaknesses: 1. The presentation of this paper needs to be improved. To be specific, * The paper doesn't flow well. Some important information (e.g., definition of RDP) is referred to the appendix and other literature. I suggest to re-organize the paper, moving some less important information (e.g., Algorithm 2) into the appendix and including the key definition/concept in the main paper. * Only showing the equations of the theoretical results is not enough, they need to be deeply analyzed for supporting the claims. For example, why is the base algorithm required to satisfy two RDP? Is this assumption realistic? Given the result of $\varepsilon'$, what's the insight for each term in it and why is it considered as a "sharp DP guarantee (Line 79)"? 2. This work has highlighted several times that handling infinite hyperparameter space is one of the key contributions. However, it is not clear how this is achieved. In Algorithm 3, the update rule of $\pi^{t+1}$ seems to be intractable for infinite $\Lambda$. In the experiments (Line 294), the hyperparameters are still discretized for the GP method. 3. Other minor issues: * $\mathcal{T}$ and T are used inconsistently in the paper. * The reference format needs to be carefully checked. The up-to-date paper should be cited instead of the arXiv version. For example, [20] has been published in ICLR. Some conference/paper titles should be capitalized (e.g., aaai -> AAAI, Dp-raft-> DP-RAFT, etc). * The DP Gaussian process and DP Bayesian optimization works are related but not discussed. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Line 203-204: It is claimed that "one can easily adapt the proof to other probability families?". This is a strong statement without any justification. Can you provide any example? 2. How is the constraint in Section 3.2 related to that in Framework 1? Is the method proposed in Section 3.2 only work when $\pi^{0}$ is uniform? 3. In Section 4.2.1, how is the budget $\varepsilon$ selected? How is the tradeoff between adaptivity and privacy (Lines 220-223) demonstrated in the empirical results? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The limitations of the simple empirical settings are discussed but left to future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for valuable feedback. **Structure and flow**: We will restructure the paper to be of a better flow. **Advantage and practicality of Theorem 1**: The statement of Theorem 1 involves two RDP guarantees that **can be the same**, that is, we can have $\alpha = \hat{\alpha}$, and $\epsilon = \hat{\epsilon}$. Our results can improve the final privacy guarantee by optimizing the choice of $\alpha, \hat{\alpha}, \epsilon, \hat{\epsilon}$, but does not require the base algorithm to satisfy two different RDP guarantees. We believe this is realistic and provides more freedom for practitioners to obtain a better and sharp DP guarantee if the base algorithms happen to have more than one RDP guarantee. More importantly, most common algorithms in DP ML satisfy $(\alpha, \epsilon)$-RDP for any $\alpha>1$, including DP-SGD. This Theorem generalizes a similar theorem that is “sharp” for the Uniform method in Papernot and Steinke (2021) which relies on exactly the same assumptions. We will add more discussions in our next version. **Infinite hyperparameter space**: Theoretically, our framework allows the hyperparameter space $\Lambda$ to be of any topological structure, including being an infinite set with some continuous structure like a subset of $\mathbb{R}^p$. One benefit of this perspective is that it allows us to adaptively and iteratively choose proper discretization of the space as we need. Meanwhile, as pointed out by the reviewer, empirically we require discretization to make the convex optimization solvable. However, as compared to the previous work, we can tolerate a much finer level of discretization since the performance of the Uniform algorithm degrades a lot with the number of candidates increasing. We will make this point more clear in the updated version. **Adoption to other probability families**: The crucial observation here is that our results depend on the probability through its probability generating functions as in Lemma A.5 on line 513 in appendix. By using the corresponding pgf in the proof for the derivations in line 530, one can generalize our proof to other probability families. There are other probability families that are proved in Papernot & Steinke (2021), which we believe is easy to generalize as the pgfs are already computed and used in their proofs for uniform method. Although the idea is straightforward, it is loose to say this would be “easy” for an arbitrary family. We will revise this statement and make it strict in our next version. **General priors for DP-HyPO**: Our framework also works for the general priors other than uniform distribution. We spend two separate sections in Appendix C and D discussing this point. We limit the prior to be uniform distribution mostly due to its mathematical simplicity, but other general priors are also considered in our framework. **Selection of $\epsilon$**: In Section 4.2.1, the privacy budgets for both the GP-instantiated DP-HyPO and Uniform are selected to make a fair comparison between the two methods. In line 220-223, we discussed that a higher value of $\frac{C}{c}$ means more privacy budget spending on adaptivity. We demonstrate the effect of this trade-off mostly in Section 4.2.2, where we vary the value of $C$ (and set $c = \frac{1}{C}$). The detailed results can be found in Table 2. Our selection of $\epsilon$ for the base algorithms is very typical in the DP literature. For example, it’s similar to the selection in Abadi, et al. (2016). **Reference**: [1] Nicolas Papernot and Thomas Steinke. Hyperparameter tuning with renyi differential privacy. In International Conference on Learning Representations, 2021. [2] Jingcheng Liu and Kunal Talwar. Private selection from private candidates. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, pages 298–309, 2019. [3] Martin Abadi, et al. "Deep learning with differential privacy." In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security. 2016. --- Rebuttal Comment 1.1: Comment: I appreciate the authors for the detailed responses which have addressed most of my concerns. Therefore, I increase my score to 5 and hope the authors can revise the paper according to the responses, especially for the paper organization and clarifications of the theoretical results/claims. --- Reply to Comment 1.1.1: Title: Thanks! Comment: We appreciate the reviewer's carefully reading our rebuttal. Will improve the paper according to your advice.
Summary: In the paper "DP-HyPO: An Adaptive Private Framework for Hyperparameter Optimization" the authors propose a framework for differential privacy HPO turning adaptive aka. model-based HPO methods into privacy preserving HPO methods. In their empirical study, the authors demonstrate that despite restrictions and cuts for achieving DP, the adaptive HPO method involving a GP is still performing consistently better compared to uniformly sampling. Strengths: + Flexible framework that allows to turn commonly used model-based HPO methods into DP-HPO methods + Improvement over uniform baseline + Theoretically guarantees for maintaining DP Weaknesses: - I am not an expert in DP but I think that privacy concerns should definitely be considered also in HPO. However, it seems like the framework is mostly centered around deep learning methods. At least there are certain assumptions for the guarantees to hold, e.g., that an ML algorithm run can be repeated to achieve a certain degree of privacy and also the experiments are solely limited to deep learning scenarios. As HPO methods are typically used in a much broader sense, it is therefore questionable to what extent the proposed framework also generalizes to other learning algorithms/models. - The common HPO community might not be familiar with the term "privacy cost" and it could make sense to at least briefly explain what the auhtors mean by privacy cost. # Minor l. 82: "gurantees" l. 168: "bridges" l. 198 "proportion" l. 227 "update" l. 228f the sentence is broken, maybe a word missing? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - What about deterministic learning algorithms? Is DP also preserved via the framework? - Could not model-free HPO methods like Hyperband be used to maintain differential privacy? Since Hyperband is a relatively strong HPO method, this could potentially be another argument to stick to such approaches without using a model. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: A discussion of limitations is missing in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback! **For general ML problems**: We clarify that our results hold for much broader HPO problems beyond deep learning. The requirement of “an ML algorithm run can be repeated to achieve a certain degree of privacy” is not an assumption of our framework. Although our framework runs an iterative meta-algorithm that has a similar structure like deep learning, we do not limit the base algorithms to be deep learning. Actually, our framework is beneficial to any DP algorithm which has hyperparameters. For example, the sparse vector technique and propose-test-release (Cynthia & Roth, 2014) are two fundamental DP algorithms whose performance largely depends on the choice of hyperparameters. **Minors**: We appreciate the reviewer's careful reading of our paper. We will correct all those good catches in the updated version. We will also make the meaning of private cost more clear. **DP and deterministic algorithms**: The notion of differential privacy (Dwork et al., 2006) is a probabilistic guarantee, which inherently requires the algorithm to be randomized. That being said, any nontrivial deterministic algorithm does not satisfy DP. **DP-HyPO with Hyperband**: Thanks for the advice. As we discussed in the paper, Gaussian Process is only one instantiation of the framework. Hyperband, as a bandit-based approach to HPO, should be suited with our DP-HyPO framework. We will list exploring this method as a future direction. [1] Dwork, Cynthia, and Aaron Roth. "The algorithmic foundations of differential privacy." Foundations and Trends® in Theoretical Computer Science 9.3–4 (2014): 211-407 [2] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to 375 sensitivity in private data analysis. In Theory of Cryptography: Third Theory of Cryptography 376 Conference, TCC 2006, New York, NY, USA, March 4-7, 2006. Proceedings 3, pages 265–284. 377 Springer, 2006. --- Rebuttal 2: Title: Respond to authors Comment: Dear reviewer, The authors have responded to your comments. Please read their rebuttal and respond to them with whether their comments addressed your concerns. Thank you in advance, Your AC
Rebuttal 1: Rebuttal: We acknowledge that the current experiment settings like MNIST are indeed overly simplistic, and think that could be the underlying cause why our experiments have not shown a significant gain. To address this, we conducted new experiments on CIFAR10 (a much more challenging task), and revealed a substantial disparity (>2% performance improvement) between our proposed approach and the baseline. Please refer to our PDF for new experiment results. Pdf: /pdf/02269128ca37f19ccbcb591ac40abcde3de0e607.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This work proposes a differentially private (DP) adaptive hyperparameter optimization algorithm called DP-HyPO, which encompasses several existing DP non-adaptive hyperparameter optimization algorithms. DP-HyPO is able to deal with hyperparameters that come with infinitely many values and by leveraging the previous output from the base algorithm, DP-HyPO outperforms the non-adaptive algorithms. The paper gives a privacy analysis of the proposed DP-HyPO and showcases two practical applications of private hyperparameter tuning by instantiating the DP-HyPO framework with Gaussian Process (GP). Strengths: Clear and intriguing presentation. The problem and several design choices are well motivated. New theoretical result analyzing the privacy loss of an adaptive private hyperparameter optimization algorithm. Weaknesses: Experiment results are not appealing enough. They do not suggest a significant advantage of DP-HyPO compared to the existing non-adaptive algorithms. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: How can one interpret $\hat{\pi}^{(0)} = \pi^{(0)} \cdot \mu(\Lambda)$ in Framework 1, which is a product of two distributions? What is the run time of performing the projection in Eq.(3.1)? Since one needs to find the optimal function in $S_{C, c}$ at every iteration of DP-HyPO, a long computation time might limit the practicality of DP-HyPO. In Section 4, is it possible to derive convergence guarantees (i.e., the utility of the private algorithm) for DP-HyPO with Gaussian Process? In Section 4.2.1 MINST simulation, how does DP-HyPO perform in the low privacy regime, say, each base algorithm has a privacy loss $\epsilon = 0.1$ (which is also practical), and the total privacy loss of DP-HyPO needs to be $\epsilon = 1$? In Section 4.2.2 applying DP-HyPO to federated learning, what is the privacy loss $\epsilon$ here? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Some limitations of this work were discussed in the conclusion. Broader impact was not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback! **MNIST is simplistic**: We acknowledge that the current experiment settings like MNIST are indeed overly simplistic, and hypothesize that could be the underlying cause why our experiments have not shown a significant gain. To address this, we conducted new experiments on CIFAR10, and revealed a **substantial disparity** between our proposed approach and the baseline. Please refer to our "general" rebuttal for more detail. **Definition of $\mu(\Lambda)$**: Here, $\mu(\Lambda)$ is the total measure of the hyperparameter space, which is a real number, and it is a scalar multiplication. We will make this clearer in the writing. **Run time of projection in Eq (3.1)**: The projection is a convex optimization problem over the space of $\Lambda$. We treat this as an abstract and separate subproblem in the theoretical part of the paper, as it requires the specification and the discretization of $\Lambda$. In our experiment part, we use CVX to solve this convex problem and the runtime of any discretization is negligible compared to the base algorithm. **The utility guarantee of DP-HyPO with Gaussian Process**: We agree this is a very important problem. However, it is also hard to obtain meaningful utility guarantees. In the previous paper by Liu & Talwar (2019), they provided several utility guarantees for different instantiations of the uniform method. The only relevant result to our algorithm is Theorem 3.3 therein, which proves the the (iterated) uniform method can achieve at least as good as one-iteration of uniform selection with high probability. In the other paper by Papernot & Steinke (2021), no utility result is presented. We believe useful utility results of DP-HyPO are even more challenging than results of uniform methods, and the specification to Gaussian Process requires in-depth analysis of the selection as well as the GP itself. We will list it as an important future direction. **The performance for MNIST when $\epsilon = 0.1$**: This is actually the high privacy regime. Training deep learning model from scratch with this small privacy budget is super challenging. Empirically, $\epsilon$ should be at least 2 to have meaningfully performance for deep learning tasks (for example, De et al. (2023) shows a state-of-art accuracy of 60% for CIFAR 10 when $\epsilon = 1$, as compared to >95% in the non-private case). **Section 4.2.2**: In this section, we present a synthetic experiment where we use the **same** loss landscape from a practical federated learning task (Figure 2) for both DP-HyPO and uniform method. This reflects the scenario where DP-Hypo has a larger privacy budget than the uniform algorithm. In this case, we experiment on different choices of $C$ to see the benefit of using the extra privacy budget for better adaptivity. The results in Table 2 demonstrate a larger $C$ means a lower loss. **Reference**: [1] Jingcheng Liu and Kunal Talwar. Private selection from private candidates. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, pages 298–309, 2019. [2] Nicolas Papernot and Thomas Steinke. Hyperparameter tuning with renyi differential privacy. In International Conference on Learning Representations, 2021. [3] De, Soham, et al. "Unlocking high-accuracy differentially private image classification through scale." arXiv preprint arXiv:2204.13650 (2022). --- Rebuttal Comment 1.1: Comment: Thank you very much for the detailed response!
null
null
null
null
null
null
From Pixels to UI Actions: Learning to Follow Instructions via Graphical User Interfaces
Accept (spotlight)
Summary: The paper describes a system to perform pixel-based interaction with human-taylored screens based on GUIs. The proposed system relies on a transformation to a text-based, structured representation of the screen and the screen tasks performed using a larger, pre-trained model of almost 300M parameters. The proposed system achieves performance similar to human beings performing those tasks and SOTA using text representations of the screen. Strengths: The main strengths of the paper are: 1. It demonstrates a successful system able to interact with the screen (web-based applications) in a similar way that humans do. 2. It does so by fine-tuning a pre-trained system with a small number of demonstrations. 3. It performs an ablation test where it shows that the pre-training information is critical. Weaknesses: The main weaknesses of the paper are: 1. The evaluation procedure is so succinctly described that it is hard to understand what was exactly done. 2. The granularity of number of tests according to test type seems to be too coarse (500 tests for 59 task is less than 10 per task). 3. Results are report with little extra information such as mean scores. 4. No statistical significance analysis is performed justifying the claims that the scores are better than SOTA or human (therefore here I assume they are similar, not better). 5. The discussion section is very limited and not clear about the actual contributions. 6. The authors fail to discuss important ethical issues concerning their research. In particular, if AI systems can interact using GUI interfaces, they can produce content and affect web-based systems in very impactful ways, including altering political and social discourses, create hard-to-identify denial attacks, fool "I am not a robot" tests, and pose as people while interacting with real people. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Are the results of the proposed system statistically better than the other systems? With what level of confidence? 2. Can you provide the standard deviations of the tests performed? Are they adequate? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The limitation section is very brief and not adequate for a paper with the ethical impacts that such technologies can create. Flag For Ethics Review: ['Ethics review needed: Inappropriate Potential Applications & Impact (e.g., human rights concerns)'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! > Ethical considerations We agree there are important considerations for responsibly developing and deploying models that can interact with websites. While we attempted to identify some of these concerns (e.g. breaking CAPTCHAs) in the “Broader Impact” subsection of Section 7, you have raised several important additional concerns. We will expand the discussion of these issues, and potential mitigations, in our revised paper. See also our response to Ethics Reviewer Vxcz and Ethics Reviewer n367. > “The evaluation procedure is so succinctly described that it is hard to understand what was exactly done. The granularity of number of tests according to test type seems to be too coarse (500 tests for 59 task is less than 10 per task).“ Please note that there are two paragraphs that discuss evaluation separately for MiniWob++ (Section 4.1) and WebShop (Section 4.2). For MiniWob++, we evaluate on 100 random instances *per task*, for a total of 5900 instances. For WebShop, we follow the standard procedure of evaluating on the 500 test instances. In both cases, our evaluations are consistent and comparable with prior work. We will try to provide additional context for readers unfamiliar with MiniWob++ and WebShop, for our revised paper. > “Results are report with little extra information such as mean scores.” We report mean scores across test instances as the primary metric, as described in the “Evaluation” paragraphs of sections 5.1 and 5.2. We also provide more detailed per-task scores in Table 5 of Appendix C.2. > Statistical significance analysis For MiniWob++, we estimated variance by computing the mean score for 3 different sets of random seeds for generating test instances. This yielded mean scores of 96.2, 96.4, and 96.1; the standard deviation across these trials was 0.15. These results will be added to Appendix C for our revised paper. The key comparison for Pix2Act on our proposed setting for MiniWob++ is with CCNet without DOM information, where Pix2Act outperforms with a mean score of 96.2 compared to 24.1. --- Rebuttal Comment 1.1: Comment: Thanks to the authors. I confirm I have read the rebuttal.
Summary: This paper investigates to create agents that interact with the UIs based on pixel-based screenshots and a generic action space corresponding to keyboard and mouse actions. Extensive experiments on simulated environments, i.e., MiniWob++, WebShop, evaluate the effectiveness of proposed agents, show the benefits of pretraining, and demonstrate the successful application of tree search. Strengths: This paper builds a pixel-based agent for instruction-following interaction with UIs. This is an interesting and valuable research topic of high bussiness impacts as well for building AI-based UI interaction assistants. The proposed methods are reasonable, and the conducted experiments are clearly presented. The corresponding analysis and discussion for limitations are comprehensive and sufficient. Weaknesses: The biggest problem of this work is about the novelty of its proposed method. It's kind of difficult for readers to catch up with the core differences compared with the existing pixel-based agents, e.g., Pix2Act, and understraning its advantages. The authors should provide mode insight by highlighting the novelty of proposed methods and clarifying the rationale behind its advantages. As for the application of tree search, it is widely seen in RL-based agent building. The corresponding introduction in this works, including its motivation, formulation and benefits, lacks of a detailed and clear statement. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. What are the key differences between this work and the existing pixel-only UI agents and tree search based RL agents in terms of the methodology? 2. What are the motivations, detailed formulation and beneits of applying tree search here? Are there anything special for completing UI tasks? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Pls kindly see the contents of weakness and questions as above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! > “The biggest problem of this work is about the novelty of its proposed method. It's kind of difficult for readers to catch up with the core differences compared with the existing pixel-based agents, e.g., Pix2Act, and understraning its advantages.” We are confused by this comment. This paper introduces Pix2Act. Pix2Act is not an approach from prior work. The only comparable pixel-based agent from prior work for the tasks that we study was a version of CCNet proposed by Humphreys et al. 2022. We discuss CCNet in the introduction, and offer several empirical comparisons in Section 5. One of the key differences is that Pix2Act is pre-trained on web-scale data, which we find demonstrates significant transfer to the instruction following tasks that we study. Pix2Act significantly outperforms the pixel-only version of CCNet on MiniWob++ with a mean score of 96.2 vs. 24.1 (Figure 3). Perhaps you are interested in the differences between Pix2Act and Pix2Struct? The underlying image-to-text model for Pix2Act is initialized from the pre-trained Pix2Struct model of Lee et al. 2023, but Pix2Struct image-to-text models have not previously been used to develop pixel-based agents. We will attempt to clarify this relationship in the introduction of Section 3. > “As for the application of tree search, it is widely seen in RL-based agent building.” “What are the motivations, detailed formulation and beneits of applying tree search here? Are there anything special for completing UI tasks?” We agree that tree search has been used by prior work in the context of games and other environments, and we did not intend to claim any conceptual novelty in the MCTS algorithm that we apply. Indeed, we adopted MCTS because it is a well-studied algorithm and “has been successfully integrated with neural network policies in prior work” (Section 3.1). We will provide some additional context of prior work related to tree search for policy improvement to our revised paper. The formulation of our MCTS implementation, including integration with policy and value networks, is detailed in Appendix B. We hope the success of tree search on the tasks that we study will inspire future work to consider related methods. --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: Thanks for your response and sorry for the confusion. I made a typo there. The related work I would like to list is Seq2Act [1] rather than Pix2Act. Besides, there is another work for developing pixel-based agents, named Spotlight [2]. Please clarify the relations and differences compared to them. [1] Li, Yang, et al. "Mapping natural language instructions to mobile UI action sequences." arXiv preprint arXiv:2005.03776 (2020). [2] Li, Gang, and Yang Li. "Spotlight: Mobile UI understanding using vision-language models with a focus." (2023). --- Reply to Comment 1.1.1: Comment: Thank you for clarifying your comment. The key difference between Pix2Act and the Seq2Act model of Li et al. (2020) is that Seq2Act is *not pixel-based*. Seq2Act represents screens using text-based information corresponding to the Android view hierarchy (which is similar to DOM information for web pages). Li et al. (2020) only briefly discuss the possibility of using pixel-based inputs in the future, mentioning that “while it is possible to directly use screen visual data for grounding, detecting UI objects from raw pixels is nontrivial”. In contrast, in our paper, we focus on agents which do not have access to or rely on structured representations such as the Android View Hierarchy, but instead rely on pixel-based representations of the input screen. The Spotlight paper of Li et al. (2023) *does not present an agent that interacts with an environment*. Their paper focuses on various supervised learning benchmarks related to user interfaces, similarly to Pix2Struct (albeit with a narrower focus in terms of tasks). Spotlight performs comparably to Pix2Struct (141.8 vs 136.7 on Widget Captioning, and 106.7 vs 109.4 on Screen2Words) on the tasks on which both models were evaluated. Therefore, Spotlight could potentially serve as an alternative underlying pre-trained model for an agent such as Pix2Act instead of Pix2Struct. We hope this clarifies the relationship between Pix2Act and these other prior works. We briefly mention the Seq2Act paper in the related work section of our submission, but we will expand our discussion of both of these two papers in our revised paper. Please let us know if there is anything else we can provide or clarify. We are happy to see the strengths mentioned in your original review, and hope we have addressed some of the weaknesses.
Summary: One of the main goals of intelligent agents is to interact with the internet in the same way that humans do. Prior state-of-the-art models relied on both the DOM structure and the graphical user interface (GUI) to achieve good performance on web browsing tasks that involve following instructions. In this paper, the authors propose a model that relies solely on pixel-based screenshots as input. The model then selects actions that correspond to basic mouse and keyboard operations. The authors use a pre-trained model called Pix2Struct and a standard behavior cloning and reinforcement learning (RL) framework to improve the model's ability to follow instructions and perform various web tasks. The authors demonstrate that their pixel-based approach, which does not require access to DOM-related information, can achieve competitive performance on the MiniWob++ benchmark. Additionally, they show through ablation studies that the performance gains can be primarily attributed to the pre-training of Pix2Struct. **Update after rebuttal** I confirm that I have read the rebuttal and other reviewers feedback. Authors have addressed my concerns. Strengths: - Teaching agents to interact with the web is an important research direction. The experiments presented in this paper provide valuable insights to the community. In particular, the authors demonstrate that large-scale pre-training on image-to-text tasks, such as Pix2Struct, can reduce the need to explicitly provide DOM information to the model. Additionally, the task transfer ablation study shows that the model is able to generalize to other tasks. These findings will help to shape the future research direction on this topic. - Paper is well written and is easy to follow. For MiniWob++ benchmark, authors performed sufficient ablation studies to quantify gains due to individual components i.e. pre-training, behavior cloning. Weaknesses: - It's not clear if authors have tuned all baselines correctly. In particular, authors show that their proposed RL data bootstrapping method leads to better performance than Behavior Cloning. If I understood the experiments correctly, they have not trained the model for the same number of steps so it's very likely that behavior cloning baseline is potentially under-trained. - There are multiple important design discussions that are added to the text without any particular justification. For example, authors fine-tune model on MiniWoB++ data for WebShop experiment but don't use WebShop data for MiniWob++ experiment. There is no justification for such design choices. Further, authors provide different learning rate and number of steps for various experiments without justifying these design choices. - Authors present very sparse results and discussion on WebShop dataset and all experiments are mainly conducted on MiniWoB++ dataset. As a reader, it feels like that authors added that dataset mainly for the sake of adding it. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Hyperparameter on Webshop data: learning rate and optimizer details. If these are similar to MiniWoB++, please add a note - Choose-date: what kind of mistakes model make ? How does RL sampled data help with poor performing tasks? - Fig 3: for how many steps did you train Ours (BC only) vs Ours ? In particular, does BC training for additional steps help in improving BC baseline? - Authors suggest that intermediate fine-tuning on MiniWoB++ helps for WebShop tasks. Do they observe similar gains while using WebShop data for MiniWoB++ task? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Authors have addressed major limitations of their work and have also provided potential misuse of their research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! > WebShop details and hyperparameters While some prior work (e.g. Humphreys et al. 2022) evaluated only on MiniWob, we believe it was important to also evaluate on WebShop to better understand the generality and limitations of our approach. We will provide some additional details, discussion, and error analysis for WebShop as space allows in our revised paper. Thank you for pointing out the missing optimizer hyperparameters for WebShop. We indeed use the same optimizer and learning rate as for MiniWob++. We will clarify this in our revised paper in section 5.1. > Tuning of BC-only model We did tune the BC-only model to determine the optimal number of training steps. With 26K training steps on MiniWob++ the model completes multiple epochs over the human demonstrations. Training for more steps on the human demonstrations does not improve performance, and can reduce performance due to overfitting. Notably, our BC-only result is also strong even relative to prior work that uses DOM information (66.5 vs. 38.7 for Humphreys et al. 2022). Finally, the finding that leveraging environment interaction improves performance beyond BC on MiniWob is consistent with prior work that has applied RL methods to MiniWob++ (e.g. Humphreys et al. 2022). > Using MiniWob data for WebShop and vice versa The improvement from using MiniWob data for WebShop is 4.0 points in task score (the relevant ablation is mentioned in section 5.4). While perhaps not a key result in the paper, we thought this improvement was significant enough to report. We did not observe any improvement from using WebShop data for MiniWob. Notably, WebShop has far fewer human demonstrations (1.5K vs. ~15K per MiniWob task), which may explain this asymmetry. > Error analysis for “choose-date” This task requires using a calendar interface to select a specific date. The calendar may be initialized to a month (e.g. June) that is far from the target month (e.g. December). The agent must navigate through the calendar one month at a time. A common error is that the model attempts to navigate the calendar in the wrong direction. Perhaps this implies a lack of knowledge related to the ordering of months, or an inability to robustly apply this knowledge in this context. > How does RL sampled data help with poor performing tasks? The tree search algorithm can find examples of successful trajectories where the current policy would otherwise fail. Therefore, training on these trajectories can improve performance. --- Rebuttal Comment 1.1: Title: Reviewer response Comment: Thanks for answering my questions. Please revise the paper by incorporating additional experiments details. --- Reply to Comment 1.1.1: Comment: Thank you again for your review! We will incorporate additional experimental details in our revised paper.
Summary: This paper presents PIX2ACT, a method that interacts with GUIs using pixel-level visual representations and generic low-level actions, emulating how humans interact with these interfaces. Unlike previous approaches, it doesn't rely on structured text-based data sources, but rather processes pixel-based screenshots, circumventing issues associated with obfuscation or misalignment in structured application data. The model showed improved performance compared to human crowdworkers on the MiniWob++ benchmark for GUI-based instruction following tasks. Strengths: The paper introduces a novel approach to the challenge of automated GUI interactions, using pixel-based screenshots as opposed to relying on text-based representations. The PIX2ACT model was tested on two benchmark datasets, MiniWob++ and WebShop, which ensured a robust and varied testing process. The paper is clearly written and offers substantial and important contributions, namely the ability to build an agent that can outperform humans in task completion using pixel-based inputs and a generic action space. Their findings indicate that the pre-training of PIX2STRUCT via screenshot parsing is effective for GUI-based instruction following with pixel-based inputs. Weaknesses: The performance on the WebShop benchmark is still significantly below larger language models using HTML-based inputs and task-specific actions. Since the method uses tree search, it seems to rely on offline environments. It’s not clear if this approach will be useful in real-world online environments, although I could see this working with perhaps re-settable virtual environments. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Would the model maintain high performance in real-time environments, given the training was conducted in offline environments? How much of the performance difference is due to the size of the model? Can this method be easily adapted to take advantage of existing larger pretrained models? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The tree search approach relies on the ability to generate new environment and instruction variations and receive reward signals, which might not be feasible in some real-world applications. The paper does not provide a clear solution for the performance gap on complex tasks and environments, like the WebShop benchmark. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! > Limitations of tree search in real-world environments We tried to address some of the tree search limitations you mentioned in section 7, as well as some potential directions towards applying such an approach more broadly (e.g. generative models of potential instructions and approximate reward models). We will add that our tree search method also requires environments with the ability to reload an initial state, which we agree may not hold in all real-world environments. That said, there are also additional considerations (e.g. security) that may discourage training-time exploration beyond controlled environments in practice. > Would the model maintain high performance in real-time environments, given the training was conducted in offline environments? While MiniWob++ and WebShop collectively evaluate a range of capabilities, we hope increasing interest in the community to develop more realistic benchmarks will enable better studying this question in future work. > How much of the performance difference is due to the size of the model? Can this method be easily adapted to take advantage of existing larger pretrained models? We have only evaluated Base Pix2Struct models (282M parameters) in this paper. However, conceptually, our method relies only on a pre-trained model implementing a generic image-to-text interface. Therefore, exploring larger pre-trained models is a great direction for future work, especially as such multimodal models become increasingly available. --- Rebuttal Comment 1.1: Title: Reviewer Response Comment: Thanks for clarifying these questions. After reading the other reviews I keep my suggestion to accept.
Rebuttal 1: Rebuttal: We would like to thank all of the reviewers for their comments and suggestions! We tried to address any questions in the individual responses. We would also like to respond to the ethical reviewers, as we did not see a way to respond individually to ethics reviews. __Ethics Reviewer Vxcz__ Thank you for your review! > Human Demonstrations We have had previous correspondence with the authors of Humphreys et al. (ICML 2022) and Yao et al. (Neurips 2022). Our use of both datasets is consistent with the permission granted by the authors. We can also confirm that neither dataset contains PII. We will work with both groups of authors to confirm further details of the annotation process. We agree that datasets that incorporate a more diverse set of instructions and example demonstrations, especially those that reflect the needs of diverse users with different abilities, would be a valuable contribution for future work, and will mention this in our revised paper. > Broader Impacts We agree there are important considerations for responsibly developing and deploying models that can interact with websites. While we attempted to identify some of these concerns in the “Broader Impact” subsection of Section 7, you have raised several important additional concerns. We will expand the discussion of these issues, and potential mitigations, in our revised paper draft. Taking your comments into consideration, below are some of our thoughts on these issues. In this paper we have trained and evaluated models only in offline environments. Responsibly deploying models in an environment where they can interact with online services would require additional considerations. Prior to enabling a model to access a new service, it would be important to sufficiently verify and/or constrain the behavior of the model to ensure that it is consistent with the terms-of-service for that service and does not otherwise cause harm. There would be many potential risks associated with deploying models that could interact with services in violation of their terms-of-service or otherwise engage in various forms of spam, fraud, or abuse. Examples of such behavior could include impersonating human users, generating harmful content or spam, or engaging in denial-of-service attacks. Models that use the same conceptual interface humans use could potentially be more capable of breaking security defenses (e.g. solving CAPTCHAs) or engaging in forms of spam, fraud, or abuse that are more difficult to detect. It is therefore important for research related to security and techniques for detecting spam, fraud, and abuse to take such potential uses into account. __Ethics Reviewer n367__ Thank you for your review! We will elaborate on the ethical considerations related to security, and possible mitigations, in our revised paper. Relevant security research would include work on CAPTCHAs. For example, Section 3.4 “Attacks against Behavior-based CAPTCHA” of https://arxiv.org/abs/2103.01748 discusses CAPTCHA attacks by bots imitating human behavior, and the authors also discuss potential mitigations. Please also see our response to Ethics Reviewer Vxcz for a broader discussion of other ethical considerations.
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper introduces PIX2ACT, a model designed to interact with GUIs using only pixel-level visual representations. Unlike most prior works that depend on structured interfaces (like HTML or DOM trees), PIX2ACT relies solely on what it visually sees. This approach is motivated by the way humans interact with interfaces without necessarily knowing the underlying code , which is interesting. The study reveals that such a model can efficiently operate in GUI tasks with generic mouse and keyboard actions, and even outperforms humans in specific benchmarks. For this they use PIX2STRUCT pre-trained on mapping screenshots to structured representation. Major Contributions: 1. Demonstrated that an agent with pixel-only inputs and generic actions can outperform humans on the MiniWob++ benchmark, achieving performance similar to top-tier agents with access to DOM information. 2. Adapted the WebShop benchmark to function with pixel-based observations and generic actions, establishing the initial baseline performance in this setting. 3. Highlighted the efficiency of PIX2STRUCT’s pre-training through screenshot parsing for GUI-based instruction with pixel-only inputs, leading to notable performance improvements on benchmarks like MiniWob++ and WebShop. 4. Successfully applied tree search for policy improvement in the MiniWob++ environment. Strengths: The paper studies a new approach to controlling GUIs using just pixel level information unlike alternate approach which uses DOM tree or HTML. While both has its pros and cons , this is one of the new papers ton explore the pixel based controls in this setting. Adaptable Benchmarking Capabilities: Authors successfully adapt and operate on the WebShop benchmark using solely pixel-based observations and broad actions, laying down a foundational performance standard. Robust Pre-training Mechanism: The strength of PIX2STRUCT’s screenshot parsing pre-training is evident. This pre-training strategy remarkably boosts the model's efficiency in GUI-based instruction tasks using pixel-based inputs, as observed in dramatic performance jumps on benchmarks. The authors have performed and demonstrated the importance of this through the ablation studies. This is very interesting. Effective Policy Enhancement with Tree Search: The model employs tree search effectively, which proves to be a straightforward yet potent method for enhancing its policy in environments like MiniWob++. A very good point that the paper has made - " Finally, aligning human demonstrations with task-dependent actions is often challenging." Weaknesses: The paper talks about generalization but slightly fails to highlight the same- for example : The chosen datasets ( miniwob++ and webshop(which is a great start) ) are relatively simple datasets and does not capture not capture the full complexity of real-world GUIs. Addtionally , to perform well reasonable well on the webshop dataset the model seems to require a fine tuning on miniwob++ which is still unclear as to why it is needed. Thus does it actually generalize to webshop? Furthermore, the authors perform hold out set evaluation on miniwob++. However, the tasks are "click-checkboxes-large, click-color, click-tab-2, click-tab-2-hard, count-shape, drag-shapes, use-color-wheel-2, use-slider-2" which could be similar to tasks in the miniwob++ benchmark train set , for instance "click-tab-2-hard" etc. It would be great to test on more diverse and complicated datasets. Over-reliance on Pre-training: While the pre-training from PIX2STRUCT seems beneficial, depending heavily on pre-trained models can sometimes limit the adaptability and flexibility of the system. Human Demonstrations: The model seems to be heavily reliant on human demonstrations for training. Gathering these demonstrations can be time-consuming and might not be feasible for all applications. Plus, the quality of these demonstrations can significantly affect the model's performance. Scalability: The approach described in the paper works well for small, simple web applications. It is not clear how the approach would scale to larger, more complex web applications. The limitation I see in the environment setup is the use of discrete bins for mouse coordinates and scroll amounts rather than allowing continuous values. A few potential issues with the discrete coordinate binning: It may make precise clicking and dragging actions more difficult if the bin sizes are too coarse. The agent would need to learn to chain multiple discrete drag actions to move long distances. The optimal binning resolution may vary across different interfaces and tasks. Finding the right granularity could require environment-specific tuning. Discretization can potentially lead to suboptimal policies compared to allowing continuous coordinates. Similarly for scroll amounts: Scrolling by discrete bins could be inefficient for long pages compared to direct scrolling by pixel amounts. The scroll bin size would again need tuning based on typical page lengths. Overall, the discrete coordinate and scrolling simplification may be necessary to limit the action space size. But it could negatively impact performance on some tasks compared to an environment with continuous values. No fine-tuning of visual features. The Vision Transformer weights are frozen, so the model may not learn visual representations best suited for this task. Minor comments: Despite the visual nature of GUIs, prior work has primarily focused on utilizing structured representations of the user interfaces (such as HTML sources, Document Object Model (DOM) trees, and Android view hierarchies) as well as custom, task-specific representations of high-level actions based on these structured representations. ---- > Missing citations. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Latency and Computation: Transforming screenshots into structured representations can be computationally intensive, which may lead to latency in real-time applications. So what is the typical time taken? Wanted to hear authors thoughts since , the system is based on interpreting pixel-level information from screenshots. This can be computationally intensive?? and may struggle with dynamic content that changes frequently, or with GUI elements that look visually similar but have different functionalities. Human Demonstrations: The model seems to be heavily reliant on human demonstrations for training. Gathering these demonstrations can be time-consuming and might not be feasible for all applications. Plus, the quality of these demonstrations can significantly affect the model's performance. Does authors have any thoughts one this? Scalability: How does PIX2ACT scale with different screen resolutions, GUI complexities, or dynamic content on the screens? The performance of models designed for specific benchmarks might not generalize well to real-world applications. Since even the original PIX2STRUCT paper talks about the impact of resolution. is there a scope for more sophisticated reward function, such as one that provides the agent with feedback about the progress of the task? also is the cumulative reward that accumulates in the miniwob++ benchmark based on the time taken to solve the task in one episode considered? since the agent receives a reward only at a terminal state. Sparse rewards can make learning more challenging, especially if the agent needs to take a long sequence of actions before receiving feedback? Cursor Representation: The system manually draws a cursor on the screenshot to indicate the mouse pointer position. This might not capture the full nuance of the cursor's state or type (e.g., a hand cursor vs. an arrow cursor), which could provide additional context to the agent. Greedy Action Selection: The agent follows a greedy policy by selecting the highest scoring action. Does this kind of approach can be short-sighted and might not always result in the optimal long-term strategy?? Beam Search Limitations: While beam search helps in narrowing down the set of most probable sequences, does it always yield the optimal sequence in practice. in the context longer sequences, where the algorithm might not explore sufficiently outside of its "beam" to find a better solution. It can sometimes prefer shorter sequences over more accurate longer ones?? Future work: An additional pre-training task that could potentially improve the model is predicting affordances of UI elements from screenshots. Affordances refer to the possible interactions that an element supports, like if a button can be clicked, a text box can be typed in, a menu can be opened, etc. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have mentioned the limitations ( It was thoughtful to anticipate about CAPTCHA) . However: Data Privacy: Since they are using screenshots to interpret and interact with GUIs, there might be concerns about data privacy, especially if sensitive information is displayed on the screen. This should be highlighted. I understand this would be issue while using DOM/HTML as well but highlighted might lead to responsible adoption. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! > Discrete bins for coordinates and scrolling We utilized discrete coordinate bins primarily for simplicity. We agree with the limitations you mentioned. While some prior work has also used coordinate bins (e.g. Humphreys et al. 2022), other work has used regression objectives or relative coordinate bins of varying precision (e.g. Baker et al. 2022, https://arxiv.org/abs/2206.11795), which may be useful to adapt for future work. We will add a discussion of this to the limitations section. > No fine-tuning of visual features We do in fact tune all parameters of the underlying model, including both the ViT encoder and the text decoder. We will make this clearer in the revised paper. > Citations in introduction Thank you, we will add citations or a forward reference to Section 6 where we discuss such approaches in detail. > Latency and Computation We did not focus on optimizing the latency of our model, but we measured the latency of processing a single screenshot and emitting an action to be 0.4s, when run on a single Google Cloud TPU v2. While our model is small (282M parameters) compared to some LLMs, further optimizations would likely be required to run in a real time setting at frame rate greater than 2 frames per second. Indeed, as we mention in section 4.1, to simplify our setting “capturing real-time observations for animated elements” is not supported, but could be interesting to consider for future work. > “GUI elements that look visually similar but have different functionalities” Conversely, elements with similar functionality may look visually similar but have different source code implementations across applications. We hope our work encourages further investigation of the pros and cons of representing web pages based on their source code vs. visual rendering. > Human Demonstrations While we are building off of and comparing with prior work that used human demonstrations, we agree that reducing the quantity of human demonstrations necessary to achieve strong performance in a new environment is a desirable goal for future work. > Scalability While the underlying Pix2Struct model supports variable-sized aspect ratios and has demonstrated success across a range of resolutions, we are unfortunately limited by the evaluations available to us. While MiniWob++ and WebShop collectively evaluate a range of capabilities, we hope increasing interest in the community to develop more realistic benchmarks will enable better studying this question in future work. For example, some benchmarks that have been released very recently (after the Neurips submission date) include Mind2Web (https://arxiv.org/abs/2306.06070) and WebArena (https://arxiv.org/abs/2307.13854). > Reward function and long trajectories For MiniWob, consistent with prior work, we do not use time-decayed rewards for evaluating and comparing different approaches. Additionally, the agent receives a reward only in the terminal state. We found that this can indeed cause challenges, e.g. for finding a correct trajectory using tree search for tasks that require longer trajectories. Perhaps related to your suggestion, for MCTS we trained a value network to estimate the future reward, based on a surrogate reward that encourages shorter trajectories. This provides information related to “progress on the task”, and Table 3 in Appendix B demonstrates that using this signal is useful. Appendix B also provides the relevant technical details. This approach to MCTS is not conceptually novel, but to the best of our knowledge this is its first application to the types of tasks we study. > Cursor representation We do in fact dynamically change the cursor rendering based on the cursor type according to the `cursor` CSS property of the currently hovered element. This can be seen in Figure 2 step 3, where the color selector element has the css property `cursor=crosshair`. That said, we didn’t find much evidence that this actually improves performance. > Greedy action selection Greedy action selection is indeed limited, especially when the underlying model is weaker. This can be seen by the difference between the greedy policy vs. tree search policy (Table 2). However, at test time, we assume the agent cannot revise previously chosen actions so report the greedy policy results for consistency with prior work. > Beam search limitations Beam search is used in the text decoder to determine the top-k actions for a given step. Most actions have a similar token length (example action strings are shown in Figure 1). Additionally, following T5 and Pix2Struct, we use length normalization (briefly mentioned in Appendix B.1) to attempt to offset beam search’s bias towards shorter sequences, which we will clarify in the main text. We also had initial experiments exploring beam search as an alternative to MCTS for identifying high-reward sequences of actions, but it did not perform as well as MCTS, especially for tasks requiring longer trajectories. > Future work and pre-training We agree with your proposed direction for future work. This type of affordance information should be abundant on the web and provide a potentially useful pre-training signal. > Data privacy Thank you for raising this potential concern. We will incorporate this into the revised paper. --- Rebuttal Comment 1.1: Title: Acknowledging Authors rebuttal Comment: I have reviewed the authors' rebuttal and acknowledge the points they have addressed. I appreciate their efforts in clarifying my concerns. I eagerly await the revised version to see the implemented changes. Great work thus far.
Summary: This work explores the possibility of building an agent that can complete tasks for users solely based on pixel-level visual representations of the GUI state and generic low-level actions, without relying on structured or task-specific representations. The authors demonstrate the effectiveness of their approach on two benchmarks, MiniWob++ and WebShop, adapted to their general Chrome-based environment framework. Strengths: This submission demonstrates significant strength in several key areas. Firstly, it presents a clear and compelling motivation for developing an agent capable of directly interacting with pixel-level GUI interfaces, aligning with a more natural and human-like manner. This aspect alone distinguishes the submission and makes it particularly appealing to the readers. Moreover, the use of well-crafted illustrations enhances the understanding of the content and effectively communicates the main ideas, further solidifying its strength. Readers can easily grasp the concepts presented, adding to the submission's overall impact. Furthermore, the proposed framework is rigorously validated on two datasets, and the comparison with baselines using DOM/HTML as input is conducted in great detail. This thorough analysis strengthens the submission's credibility and highlights its robustness. Weaknesses: I have several concerns regarding the experimental design in the submission: (1) One notable issue is the absence of certain ablations that would provide valuable insights into the effectiveness of each component in the proposed agent framework. Specifically, the authors should address the bottleneck of the framework, whether it lies in the visual encoder or the text decoder. Additionally, it would be beneficial to explore the potential performance improvements by utilizing alternative variants of visual encoders or text decoders. Furthermore, since the visual encoder plays a crucial role in extracting and understanding instructions embedded in the interface, it is essential to verify the effectiveness of using ViT in OCR accuracy. (2) The rationale behind embedding instructions into the UI screenshot instead of providing them directly as input remains unclear. The submission should elaborate on this choice and justify its benefits. Moreover, considering the availability of open-source Visual Language Models (VLMs) like mPLUG-Owl [1], OpenFlamingo [2], and Otter [3], which natively support instruction following and accept multimodal input, it would be valuable to include these VLMs as baselines in the revised version. This addition would offer a more comprehensive evaluation and better contextualize the proposed method's unique contribution in comparison to existing approaches. [1] https://github.com/X-PLUG/mPLUG-Owl [2] https://github.com/mlfoundations/open_flamingo [3] https://github.com/Luodian/Otter Addressing these concerns would significantly enhance the experimental design and strengthen the submission's overall validity and contribution to the field. Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) In Figure 2, step 3, why are there two overlapping ‘+’? Does it mean the mouse is pressed down? (2) This is not a serious issue, but some Figures are quite blurry (e.g., MiniWob++ examples in Figure1 and top row in Figure 2), is this due to the way the dataset is constructed? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have included discussions of the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! > Bottleneck is vision encoder or text decoder? The text decoder is only responsible for decoding short strings corresponding to the closed set of actions shown in Figure 1. Therefore, it seems reasonable to assume that the ViT encoder is the “bottleneck” towards achieving strong performance. As discussed in the introduction, many of the key challenges are essentially representation learning challenges for the encoder, such as understanding the interface layout, recognizing and interpreting visually-situated natural language, identifying visual elements, and predicting their functions and methods of interaction. Our ablations highlight the screenshot parsing pre-training task of Lee et al. 2023 as a critical factor towards improving model performance (Figure 3). > Architectural variations of Vision Transformers Pix2Act builds off of the Pix2Struct model of Lee et al. 2023, which uses a standard ViT architecture. We believe their strong results on tasks related to UI understanding (e.g. RefExp, Screen2Words) support the choice of this model as a starting point for pushing the boundaries of a pixel-based digital agent. > “The rationale behind embedding instructions into the UI screenshot instead of providing them directly as input remains unclear.” While we agree this method for representing instructions can perhaps seem unorthodox at first, prior work (e.g. Lee et al. 2023) has used this method for incorporating instructions in the input of pixel-only models for tasks such as DocVQA. This was hypothesized to enable better transfer from the pre-training task (see section 2.5 of Lee et al. 2023), and validated by strong empirical results. We will add this justification to the paper. We also note that while this is the most natural way to incorporate instructions when starting from Pix2Struct, providing the instructions as text may work better for other pre-trained models that are tailored to accepting a text instruction as input. > ​​Baselines for “Visual Language Models (VLMs) like mPLUG-Owl, OpenFlamingo, and Otter” We are excited by the increasing public availability of larger and more capable VLM models, and we agree evaluating such models on our proposed setting would be interesting for future work (we include some related discussion in section 7). However, we note that the Neurips 2023 submission deadline was May 17, 2023. To the best of our knowledge, the mPLUG-Owl preprint was released on April 27, 2023. OpenFlamingo was announced on March 28, 2023 and a paper is not yet available. The Otter preprint was released on June 8, 2023. Additionally, compared to Pix2Struct, VLM models such as mPLUG-Owl, OpenFlamingo, or Otter have not demonstrated as strong a capability for learning representations of inputs that contain visually-situated language such as UI screenshots, as measured on tasks such as RefExp, Screen2Words, etc. Indeed, one of the limitations of mPLUG-Owl is “complex OCR” and the ability to correctly interpret web page screenshots (Figure 12 of mPLUG-Owl paper, ​​https://arxiv.org/abs/2304.14178). In contrast to these publicly available VLMs, GPT-4 has demonstrated strong results on tasks involving visually-situated language, outperforming Pix2Struct on ChartQA and AI2D (https://openai.com/research/gpt-4), but as far as we know support for tuning this model on multimodal inputs is not publicly available, and the details of the model have not been published. Regardless, our ablations (Figure 3) suggest that pre-training on the screenshot parsing task of Lee et al. 2023 is critical to achieve strong performance on the tasks that we study, suggesting that this task or a similar one should be included in the pre-training of VLMs that are used to develop digital agents. > Figures are quite blurry We will try to improve the rendering quality, but are limited by the resolution of the MiniWob environment, which is 160 by 210 pixels. > Figure 2 step 3 The color selection element renders a “+” at the currently selected color. When the pointer is positioned over this element, it is also rendered as a “+”, as this element has the css property “cursor=crosshair”. At step 3 of Figure 2, the pointer is positioned over the currently selected color, leading to two similar “+”s that are slightly offset. --- Rebuttal Comment 1.1: Comment: Thanks for providing the rebuttal response. I acknowledge that I have read the response.
null
null
null
null
Undirected Probabilistic Model for Tensor Decomposition
Accept (poster)
Summary: The authors propose a new Probabilistic Tensor Decomposition (TD) method. By modeling the joint probability of the data and latent tensor factors using an Energy Based approach, they make no (possibly restrictive) structural and distributional assumptions on the generative process linking latent and observations. The model is trained using an upper bound of the Conditional Noise-Contrastive Estimation Loss. It is then evaluated on simulations using non Gaussian generative models and on tensor completion task on real world datasets. Strengths: The possibility to bypass the specification of both structural and distributional dependencies is very interesting. It has many applications beyond the tensor completion tasks, for example when analyzing datasets with complicated dependencies on significantly simpler latent variables. The simulated datasets used in the paper are simple but illustrate this point well. Results on real world tensor completion tasks are convincing. Weaknesses: Major Points: 1) The code is not provided. 2) In the introduction, it would be worth motivating the use of probabilistic methods over traditional ones. 3) I did not find the Energy Based Model section (2.2) very well written. In particular, it is not clear how to choose the conditional $p_c(y|x)$ to improve efficiency. 4) The method is only evaluated on tensor completion tasks. Do the inferred factors have any explanatory value ? How robust is the decomposition ? 5) There is no discussion on the compute time of the proposed method. Minor Points: 1) l.43 It is worth mentioning that Generalized CP decomposition exist but that most of them do not allow a probabilistic treatment of the data. Moreover, beyond Gaussian and Binomial, tractable decomposition have been developed for Negative Binomial Distributions. 2) l.93 I think the authors implicitly assume $R_d = R$ for all $d$ without explicitly saying it. 3) Experimental details in the main paper and supplementary are not consistent (experiment 1: main paper $R=5$, supplementary $R=3$). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1) The paper only discusses 2 and 3-way tensors. How does the method scale with higher order tensors ? 2) Beside Tensor completion, an interesting aspect of TD methods is often to explain a dataset $\mathcal{X}$ using simpler yet unobserved factors $\mathcal{Z}$. How robust are the inferred factors depending on the initialization of the model ? Can you measure it using e.g. a similarity metric and compare it to existing methods ? 3) The authors argue that the use of overly simple generative model $p(x|z)$ can bias the TD decomposition. They provide convincing illustration on their toydataset but it would be interesting to better understand why it is the case on the tested the real world dataset. 4) The approach developed by the authors seems extremely flexible. Yet, by allowing almost arbitrary complex link between the latent and the observation, one might fear that the inferred latent distribution is "non-unique", which might hinder the interpretation of the discovered factors. Can you say more about identifiability ? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: There is no discussion on the limitation of the method when the posterior $p(z|x)$ itself is non Gaussian. The approximation $q(m; \phi)$ is set to be multivariate normal with diagonal covariance, and, so, it cannot model, for example, multi-modal distribution on $z$. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## W1: Code availability. We send an anonymous link to the AC via the official comment. ## W2: Motivation of using probabilistic methods over traditional ones. There are several advantages of using probabilistic models, such as, 1. Using probabilistic models, we can deal with different data types. For examples, by adopting different distributions, it provides a principled way of designing proper loss functions. 2. Probabilistic models can give uncertainty estimates about the observations and latent factors. 3. Bayesian models have other potential benefits, such as continual learning, less likely to over-fit and so on. We will add more discussions about this part in the introduction. ## W3: How to choose conditional noises to improve efficiency. The basic idea of NCE is to identify the noises from the data points, which shares some similarities with density ratio estimation such as in GAN. Unfortunately, there is no theoretical guidance about how to choose noise distributions yet. A common belief is that the noises should be similar to data. This is the motivation of using data-dependent noises, rather than the same noise for each data point. We will rearrange the presentation to make it more clear. ## W4: Explanatory value of factors? How robust is the decomposition? Currently, we do not focus on learning explanatory factors. To learn such factors, one may need to add additional constraints on the tensor decomposition, e.g., identifiability of the factors. This is generally hard for non-linear models using neural networks. However, we conduct some simple investigation on the *Alog* dataset following the setting of [1]. In specific, the tensor latent factors are firstly mapped to the two-dimensional plane through PCA and then the $k$-means clustering results are plotted for each data point. The figure can be found in the attached PDF. We find hidden patterns similar to Figure 4 in [1]. Moreover, we investigate the results for different ranks and initializations, showing that this pattern is not due to random effects. Although we initially did not consider this problem, our model seems to have better clustering results compared with other baselines. This indicates the potential of our model to discover explanatory latent factors. Currently, we do not conduct experiments about robustness. It would be a very interesting direction since our model is capable of handling different data distributions. Moreover, the adversarial robustness would also be an interesting direction to investigate. [1]. Tillinghast, C., Wang, Z., & Zhe, S. (2022). Nonparametric sparse tensor factorization with hierarchical Gamma processes. In ICML. ## W5: Compute time. Due to the space limit, please check our response to *W1* raised by **Reviewer Duah*. ## Minor1: Tractable tensor decomposition for other distributions We agree that there are many tractable TDs for other distributions. We will add several references about this point, such as [1-3]. [1]. Schein, A., Zhou, M., Blei, D. & Wallach, H.. (2016). Bayesian Poisson Tucker Decomposition for Learning the Structure of International Relations. In ICML. [2]. Hong, D., Kolda, T. G., & Duersch, J. A. (2020). Generalized canonical polyadic tensor decomposition. SIAM Review. [3]. Soulat, H., Keshavarzi, S., Margrie, T., & Sahani, M. (2021). Probabilistic tensor decomposition of neural population spiking activity. In NeurIPS. ## Minor2: Assume $R_d = R$. We will highlight this assumption. ## Minor3: Inconsistent setting of rank. We checked experimental records and the rank should be 3. We will fix it. ## Q1: Scalability with higher order tensors In principle, both the time and space complexity of our model is linear with the tensor order $D$. Given the same number of observations $N$, our model is scalable with the tensor order $D$. ## Q2: Robustness of the learning process of latent factors. Generally, we think it is hard to learn identifiable latent factors for non-linear models, especially using neural networks. However, we do find robustness of these latent factors in some sense. For example, in our response to *W4*, we plot the latent factors for different ranks and different initializations. They show similar patterns in the clustering. This would be an interesting direction to study further, including new metrics to evaluate these similarities. ## Q3: Different distributions in real applications. Demonstrating this phenomenon is hard in real applications, since we do not know the underlying real distributions. However, we can find some evidence from current results. In Table 1 of the manuscript and the additional results in our response to **Reviewer buha**, we observe that the improvements in MAE are notably more significant. One possible reason is that other baselines are trained implicitly by minimizing the square loss as they adopt Gaussian assumptions about the data. However, our model does not make such assumptions about the data and the loss function. Hence, it can adapt to the data distribution more flexibly. Additionally, in Table 2 of manuscript, while simply using NN to model the trajectory is hard, as shown by NNDTL and CTNN, our model outperforms them and NONFAT, which is the SOTA in the field. This may indicate that our model efficiently learns how the data distribution shifts with time. ## Q4: Identifiability about the tensor factors. Unfortunately, we think the model cannot learn identifiable latent factors currently, since we are using non-linear neural networks. However, as our response to *W4*, our model possibly learns some sharing latent patterns under different settings and different initializations. ## Limitation: Non-Gaussian posteriors. We agree with the reviewer about this limitation. However, due to the computational efficiency, mean-field Gaussian assumption is widely used in variational inference. It could be a potential direction to improve our model, by using more expressive distributions. --- Rebuttal Comment 1.1: Comment: Thank for your response. Although I appreciate the effort of answering my concerns/questions, the results provided for W4 - Q2 are not satisfying. The authors should either (i) carefully evaluate the robustness of their model or (ii) clearly state that interpretability and robustness of the discovered factors is beyond the scope of the study, which only evaluates the method for Sparse Tensor Completion (and which is fine...) In its current state: (i) The Alog dataset is not described. (ii) The authors mentioned "different ranks and initializations" but only 1 seed per rank is plotted on the attached .pdf. (iii) No interpretation on the discovered cluster is provided (iv) The author do not report any robustness metric (see for example [1]) and instead use vague statements like "our model seems to have better clustering results compared with other baselines". [1] Williams et al., 2018, Neuron 98 --- Reply to Comment 1.1.1: Comment: Thanks very much for your comments and providing the reference. We apologize for any ambiguity raised by the previous response. We should clarify that in this work we do not aim to study the explanatory value of the tensor factors nor the robustness of the decomposition. And we will not make such claims without further theoretical or experimental results. The purpose of the previous response is to show some potential evidence for future research, which we think would be interesting. Regarding the questions (i). The Alog dataset is not described. >The Alog dataset is an order-3 tensor, extracted from an access log of a file management system. Each dimension represents *user*, *action*, *resource*. We included the above descriptions in the manuscript. This dataset was initially processed by [1], and has been widely used to evaluate completion performances, such as in [2-4], which are also our baseline models. However, we can hardly find the original data and detailed description about each dimension, since only processed data were provided. For example, mode-2 represents 100 actions, but we do not know what these actions are. So we were not able to explain the clustering results. The clustering results were presented following [4] to show some preliminary evidence. >- [1]. Zhe, S., et al. Scalable nonparametric multiway data analysis. AISTATS (2015). >- [2]. Zhe, S., et al. Distributed flexible nonlinear tensor factorization. NeurIPS (2016). >- [3]. Tillinghast, C., et al. Probabilistic neural-kernel tensor decomposition. ICDM (2020). >- [4]. Tillinghast, C., et al. Nonparametric sparse tensor factorization with hierarchical Gamma processes. ICML (2022). (ii). The authors mentioned "different ranks and initializations" but only 1 seed per rank is plotted on the attached .pdf >We are sorry about the ambiguity raised by this sentence. Here, “different ranks and initializations” means that the models with different ranks were initialized differently from each other. (iii). No interpretation on the discovered cluster is provided >As we described in the answer to question (i), since this dataset is mainly used to evaluate completion performances and the original data resources were not released, we were not able to give detailed interpretation about the clustering results, for example, the exact meaning of each cluster. (iv). The author do not report any robustness metric (see for example [1]) and instead use vague statements like "our model seems to have better clustering results compared with other baselines". >Thanks for providing the reference. Currently, we do not conduct numerical evaluation about the robustness of the decomposition nor the interpretation of the tensor factors. However, since we focus on tensor completion, the prediction results of those missing entries are evaluated under different initializations and data folds. Maybe datasets from neural science etc. are more suitable for explaining the tensor factors, where we can use domain knowledge for explanation. We will leave this part for future research. If there are any unclear issues in our response or if you have any further questions, we are more than happy to answer them.
Summary: This paper proposes a probabilistic tensor decomposition model called EnergyTD that integrates a deep energy-based model (EBM) in tensor decomposition. The EnergyTD does not model the values in a tensor as the conditional probability conditioned on the latent factors based on the predefined models but models them as the joint probability of the data and the latent factors. It provides a learning algorithm that extends the CNCE loss using the variational approach. The experimental results show that EnergyTD outperforms conventional methods in the tensor decomposition tasks assuming diverse distributions including continuous-time tensors, tensor completion, and continuous-time tensor completion. Strengths: This paper proposes a probabilistic tensor decomposition model that integrates EBM in tensor decomposition along with a theoretical guarantee. The experimental results show that the proposed method outperforms conventional methods in the tensor decomposition tasks assuming diverse distributions. This paper is well-organized and easy to follow. Weaknesses: The time/space complexity of the proposed method is only poorly discussed, and there is no evaluation regarding the computational cost. This is one of the major concerns in the tensor decomposition research community. A minor problem: The "difficultly" in line 109 should be "difficulty". Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: What is the time/space complexity of the proposed method? How is the proposed method computationally effective compared to the conventional methods? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: As the authors describe in the Conclusion, there is room to improve theoretical analysis. Moreover, the architecture of the network structure should be discussed more in the future. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## W1: Time/space complexity. The time complexity of training should be $\mathcal{O}(\nu B ( D R H + L H^2))$, where $B$ is the batch size, $\nu$ is the number of conditional noises, $H$ is the number of hidden units per layer, $L$ is the number of layers and $D$ is the tensor order. The space complexity should be $\mathcal{O}(D R + LH^2)$ to store tensor factors and NN parameters. The space complexity is linear to the tensor order and rank, which is similar to many classical TDs such as CP and tensor train decomposition. We additionally use an NN. However, the network is small, e.g., several MLP layers with hundreds of hidden units each. The time complexity is $\nu$ times more than traditional ones, since we need to compute forward passes for $\nu$ particles. However, since these forward passes can be computed in parallel, we believe the computational speed can be easily improved using parallel programming libraries such as `JAX` and `functorch`. Moreover, unlike many traditional TDs performing ALS updates on the whole datasets, we use stochastic optimization, which is highly scalable to large datasets. To illustrate the computing time, we post an example of training the *Air* dataset, with rank 5. We compare with CTGP, NNDTN, NONFAT, which are better than other baselines, and THIS-ODE suggested by **Reviewer buha**. All experiments are conducted on a single RTX A5000 GPU. For more details, please check our response to **Reviewer buha**. | | CTGP | NNDTN | NONFAT | THIS-ODE | EnergyTD | | ----------------------- | --------------- | --------------- | --------------- | --------------- | --------------- | | Time/Epoch (in seconds) | 1.17 $\pm$ 0.30 | 2.18 $\pm$ 0.04 | 2.51 $\pm$ 0.13 | 464. $\pm$ 131. | 5.30 $\pm$ 0.37 | Our model is slower than NONFAT, but much faster than THIS-ODE. While NONFAT is faster for each epoch, it converges much slower than our model. As suggested by the paper of NONFAT, we run 10,000 epochs to get a good performance. For our model, only 200 epochs is sufficient to converge. ## W2: Typo Thanks for pointing out the typo. We will fix it. ## Q1: What is the time/space complexity? Please see our response for *W1*. --- Rebuttal Comment 1.1: Comment: Dear Reviewer Duah, Could we kindly know if the responses have addressed your concerns and if further explanations or clarifications are needed? Your time and efforts in evaluating our work is much appreciated. --- Rebuttal Comment 1.2: Title: Thank you for your rebuttal! Comment: Thank you very much for your answer to my question! I am satisfied with the discussion on computational complexity and I change my rating. I sincerely apologize for my late response. --- Reply to Comment 1.2.1: Comment: We are glad that we could address your concerns. Thank you for reviewing our work!
Summary: This paper uses the Energy Based Model (EBM) framework to capture the joint probability of the data and latent tensor factors to learn as much information from data as possible, which discards the structural and distributional assumptions and thus avoid picking an inappropriate TD model. To further flexibly learn the unnormalized probability function, the authors derive a variational upper bound of the conditional noise-contrastive estimation objective. The main contribution lies in inserting the EBM framework into modeling probabilistic Tensor Decompositions. Strengths: Pros: 1. This paper first introduces undirect graphic models into modeling probabilistic tensor decomposition. In detail, the authors consider latent factor matrics as latent factors in the Variational Noise Contrastive Estimation framework, leading to flexible modeling of joint distribution over observations. To the best of my knowledge, this is interesting and novel to the Tensor Decomposition community. 2. This paper is well-written and the technological parts are sound and valid. Weaknesses: Cons: 1. If what I understand is correct, one important goal of this paper is to find the $\theta$ and $p_{\theta}(x)$. The authors use the framework of variational noise contrastive estimation to obtain the unnormalized function $\phi(x;\theta)$. However, it seems to be easier to directly model $\phi(x;\theta)$ and thus minimize $\mathcal{L}_{CNCE}(\theta)$. Furthermore, it seems that these $\textbf{m}$s are unnecessary and can have arbitrary dimensions. 2. Most technical contributions are in the form of a combination of existing theory and framework. This actually decreases this paper's contributions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Questions: 1. As mentioned in the last part, I am confused about why not directly model $\phi(x;\theta)$ and $p_{\theta}(x)$. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes, the authors have adequately addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## W1: Directly modeling $\phi(x; \theta)$ and the choice of $m$. Thanks for the question. Apart from the data distribution $\phi(x; \theta)$, in tensor decompositions, we aim to infer the latent factors, i.e., $z$ in the manuscript. If we directly model $\phi(x; \theta)$, we cannot obtain the information of these latent factors. Therefore, we need to model the joint energy function $\phi(x, m; \theta)$, where $m$ consists of $z$. Equipped with this joint energy function, we can sample posterior of the latent factors from $p(z \mid x)$. However, the data energy function $\phi(x; \theta) = \int \phi(x, m; \theta) dm$ becomes intractable, since we adopt non-linear mappings. This is one main difficulty in the learning process. Variational inference (VI) has become one of the most popular tools to deal with such intractable integrations of latent variables. By using VI, we finally obtain the lower bound of the CNCE loss in Eq. (5), which is tractable to compute. $m$ actually consists of latent tensor factors (as described in Line 132) and the size of $m$ is $D R$, where $D$ is the tensor order and $R$ is the tensor rank. The rank serves as a trade-off between model complexity and expressive power, which is usually treated as a hyper-parameter. In tensor decomposition, these tensor factors are crucial to learn latent correlations of the high-order data. Taking tensor completion applications as an example, once we learn these tensor factors, we are able to predict missing entries given very sparsely observed data. ## W2: Contribution. Our main contribution is to establish a new tensor decomposition model. As **Reviewer buha** suggested, there is no previous literature trying to use EBMs to model tensor data. Our model shows the potential to deal with diverse types of applications in tensor decompositions. Although both tensor decompositions and EBMs are existing models, it is generally hard to learn EBMs for such high dimensional data in the presence of latent factors. To enable efficient learning processes, we establish the variant of CNCE loss, as well as ad hoc network architectures, that finally result in a powerful TD model. ## Q1: Why not directly model $\phi(x; \theta)$ and $p_\theta(x)$. Please see our response to *W1*. --- Rebuttal Comment 1.1: Comment: Dear Reviewer 32MU, Could we kindly know if the responses have addressed your concerns and if further explanations or clarifications are needed? Your time and efforts in evaluating our work is much appreciated.
Summary: The paper proposes an innovative approach for non-linear tensor decomposition. It utilizes the deep energy-based model (EBM) to model the joint energy function of tensor observations and latent factors. This design enables a more flexible decomposition without the need for explicitly defining the interaction between the latent factor and the commonly used Gaussian prior. Furthermore, the expressive nature of the energy function allows for the modeling of side information, such as timestamps. Strengths: The paper presents a clear motivation and introduces innovative and sophisticated solutions. To my knowledge, this is the first work to adopt Evidence-Based Medicine (EBM) for tensor decomposition without the need for an explicit design of the decomposition form. Furthermore, the adaptability of the proposed method for dynamic tensors is impressive. The experimental setup is robust and comprehensive, demonstrating the thoroughness of the research. Weaknesses: 1. Instead of using NON-FAT or BCTT for experiments on dynamic tensors' decomposition and visualization, it would be more appropriate to choose THIS-ODE[1] as the baseline. While NON-FAT and BCTT are designed for modeling temporal dynamics of factors in a tensor, THIS-ODE aligns with the proposed model by having an entry-wise dynamic design. 2. It appears that the objective function involves entry-wise importance sampling, and completing the posterior will require another round of entry-wise sampling. I am unsure if this will lead to training and inference costs becoming problematic, especially in scenarios with a large number of entries. It would be beneficial to have a more in-depth discussion about the scalability aspect. [1] Decomposing Temporal High-Order Interactions via Latent ODEs, Shibo Li, Robert M. Kirby, and Shandian Zhe, The 39th International Conference on Machine Learning, 2022 Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weakness Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## W1: Comparison with THIS-ODE. Thanks to the reviewer for pointing out this reference, which is very relevant to our model. We would like to add some discussions about it in the manuscript and compare it with our model in experiments. However, its official implementation runs very slow and we are not able to complete very sufficient experiments due to the time limit. Here is a brief comparison about the computational time of several models, running on a single RTX A5000 GPU. Apart from THIS-ODE, we also list the computational time of CTGP, NNDTN and NONFAT, which perform better than other baselines. We test on the *Air* dataset, setting batch size to 128 and tensor rank to 5, and report the running time of one epoch by averaging on 10 runs. For our model, we set $\nu = 20$. For NONFAT and THIS-ODE, the default settings are used. | | CTGP | NNDTN | NONFAT | THIS-ODE | EnergyTD | | ----------------------- | --------------- | --------------- | --------------- | --------------- | --------------- | | Time/Epoch (in seconds) | 1.17 $\pm$ 0.30 | 2.18 $\pm$ 0.04 | 2.51 $\pm$ 0.13 | 464. $\pm$ 131. | 5.30 $\pm$ 0.37 | As we can see, our model is slower than NONFAT, but much faster than THIS-ODE. While NONFAT is faster for each epoch, it converges much slower than our model. As suggested by the paper of NONFAT, we run 10,000 epochs in order to get a good performance. For our model, only 400 epochs is sufficient to converge. For THIS-ODE, it takes more than 30 hours to run 100 epochs on the *Air* dataset, and more than 40 hours to run 50 epochs on the *Click* dataset. Due to the time limit, we do not run more epochs. Moreover, the learning process shows that the model almost converges at this point, so the results are also convincing. The preliminary results are listed below and we will definitely conduct more sufficient experiments later. Our model outperforms THIS-ODE with much lower computational cost. | | RMSE | | | | MAE | | | | | -------- | ----------------- | ----------------- | ------------------ | ----------------- | ----------------- | ----------------- | ----------------- | ----------------- | | *Air* | Rank 3 | Rank 5 | Rank 8 | Rank 10 | Rank 3 | Rank 5 | Rank 8 | Rank 10 | | THIS-ODE | 0.588 $\pm$ 0.003 | 0.612 $\pm$ 0.000 | 0.586 $\pm$ 0.000 | 0.578 $\pm$ 0.000 | 0.434 $\pm$ 0.002 | 0.451 $\pm$ 0.000 | 0.426 $\pm$ 0.000 | 0.424 $\pm$ 0.000 | | EnergyTD | 0.302 $\pm$ 0.008 | 0.291 $\pm$ 0.006 | 0.300 $\pm$ 0.012 | 0.283 $\pm$ 0.004 | 0.184 $\pm$ 0.006 | 0.177 $\pm$ 0.003 | 0.172 $\pm$ 0.006 | 0.184 $\pm$ 0.003 | | *Click* | | | | | | | | | | THIS-ODE | 1.408 $\pm$ 0.009 | 1.409 $\pm$ 0.008 | 1.405 $\pm$ 0.005. | 1.405 $\pm$ 0.008 | 0.846 $\pm$ 0.009 | 0.835 $\pm$ 0.007 | 0.846 $\pm$ 0.002 | 0.843 $\pm$ 0.007 | | EnergyTD | 1.396 $\pm$ 0.003 | 1.385 $\pm$ 0.003 | 1.356 $\pm$ 0.001 | 1.357 $\pm$ 0.001 | 0.777 $\pm$ 0.003 | 0.775 $\pm$ 0.003 | 0.772 $\pm$ 0.002 | 0.773 $\pm$ 0.001 | Currently, we have not got very faithful trajectory estimates using the simulation data with THIS-ODE ## W2: Training and inference cost. Firstly, for the training, the computational cost mainly comes from the evaluation of the energy function. In particular, we need to evaluate the energy function for $\nu$ times, where $\nu$ is the number of conditional noises and is typically 20 in our experiments. The time complexity is $\mathcal{O}(\nu B ( D R H + L H^2))$, where $B$ is the batch size, $\nu$ is the number of conditional noises, $H$ is the number of hidden units per layer, $L$ is the number of layers and $D$ is the tensor order. Since we are using very small NNs, e.g., several MLP layers with hundreds of hidden units each, the computational cost is not large. From the table above, we can see that our model is slower than NONFAT, but much faster than THIS-ODE. More importantly, since the forward passes for each particle can be computed in parallel, we believe the computational speed can be easily improved using parallel programming libraries such as `JAX` and `functorch`. Then, for the inference, apart from traditional MCMC samplers, we can also perform a very effective grid search to find MAP estimates in parallel. This can be much faster than the THIS-ODE model, which uses time-consuming diffusion processes for training and inference. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response! I still strongly support this work. --- Reply to Comment 1.1.1: Comment: Thanks for your positive feedbacks! We will improve the manuscript following the suggestions.
Rebuttal 1: Rebuttal: We thank all the reviewers for their constructive comments and suggestions. Below we respond to them respectively. Pdf: /pdf/dc05a80dd475f1ad9adb270c51adf05e9e88c605.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Borda Regret Minimization for Generalized Linear Dueling Bandits
Reject
Summary: The authors consider the linear dueling bandit problem, where the regret is measured in terms of Borda regret. The Borda regret function was adopted as that used in Saha et al. 2021a. Different from conventional bandit settings, there is a mismatch between the reward function and the regret function definition in the sense that the expected reward of pulling a pair of arms is p_i,j, while the regret is defined based on the function B(i)+B(j), where B(i) is the Borda score of i-th arm. The authors provide a lower bound on the regret by the construction of hard cases and a matching algorithm based on explore-then-commit. The adversarial setting is also considered, where the EXP3 style algorithm is proposed and analyzed, and shown to match the upper bound. Strengths: + For both settings, the proposed algorithms are shown to match the upper bounds. + The construction of hard cases is interesting. Weaknesses: - The experiment results are not very convincing. - The algorithms are relatively straightforward extensions of the existing approaches - The constructed hard case is also a relatively straightforward extension of that in Saha et al. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The setup for experiments is confusing. The two algorithms are for single context and adversarial settings, respectively. It is not clear why they can be used in the same setting for evaluation. line 8 of algorithm 1: "s" should be "t" Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We address your comments and questions as follows: --- **Q1**: The experiment results are not very convincing. The setup for experiments is confusing. The two algorithms are for single context and adversarial settings, respectively. It is not clear why they can be used in the same setting for evaluation. \ **A1**: We are sorry if the description of the experiments is not clear enough. Here we will explain the two experimental settings respectively: \ + The synthetic setting is in the linear stationary setting. Since BEXP3 can work for the linear adversarial setting (which admits the linear stationary setting as a special case), it also works for our linear stationary setting. We conducted this experiment as a sanity check for our theory. We would also like to mention that our hard instance construction (which the experiment is based on) can apply to both the stationary and the adversarial setting. + The real-world dataset is a non-linear stationary setting. BETC can work under this setting. BEXP3 is not designed for non-linear link function, but we still test whether it can work even under the misspecified situation, as explained in Line 338. Based on your feedback, we conducted an additional experiment to examine the performance of BEXP3. In this experiment, the number of items is $K = 64$, and the feature dimension is $d = 5$. The environment is adversarial and will alter its parameter $\theta^*_t$ (defined in Sec. 3) every 100 steps to make the algorithm’s chosen arms have the worst Borda score, introducing the largest one-step Borda regret. More specifically, we set $\theta^*\_t = \arg \min\_{\theta} B\_{\theta}(i_t)+B\_{\theta}(j_t)$. The whole simulation takes 100,000 steps, and we report the average cumulative regret over 100 runs with shaded areas near each line indicating the standard deviation. From the figure we uploaded, we can see that a non-adversarial algorithm (BETC-GLM) quickly suffers from a linear regret in the commit phase because the adversarial environment makes the committed arm the worst. DEXP3 and BEXP3 both can adapt to the adversarial environment, but our algorithm BEXP3 can also take advantage of the linear features, and thus incurs a smaller regret than DEXP3. Please see the figure we provided in the uploaded PDF file. --- **Q2**: The algorithms are relatively straightforward extensions of the existing approaches \ **A2**: Since the proposed setting has never been studied before, our main goal is to propose efficient algorithms that can optimally solve the new problem, rather than propose completely different algorithms from any previous one. In addition, it suffices to adopt some standard techniques such as explore-then-commit (ETC) or EXP3 in our algorithm design. Building upon standard techniques makes the algorithm easy to understand, implement, and analyze, which we consider as an advantage rather than a disadvantage. --- **Q3**: The constructed hard case is also a relatively straightforward extension of that in Saha et al. \ **A3**: We agree that our construction shares a similar high-level structure as in [Saha et al 21]. However, since the construction and the proof are for the contextual setting, it is quite different from the multi-armed setting. \ In detail, our construction is based on the hardness of identifying the best arm in the $d$-dimensional linear bandit setting, which is different from [Saha et al 21]. Besides, we used a different proof technique. [Saha et al 21]’s proof is directly based on hypothesis testing: either identifying the best arm with gap $\epsilon$ within $T$ rounds (if $T > \frac{K}{1440 \epsilon^3}$) or incurring $\epsilon T$ regret (if $T \le \frac{K}{1440 \epsilon^3}$). In contrast, our proof technique bounds from below the regret by the expected number of sub-optimal arm pulls, and does not divide the problem instances into two cases (whether $T≤\frac{K}{1440 \epsilon^3}$). --- Rebuttal Comment 1.1: Comment: Thanks for the explanation and additional experiment. The additional experiment helps. For the other two points, my feeling is still mixed. Yes, indeed simpler algorithms and bounds are good, but if the techniques are too similar to known ones, there is a concern about whether the new problem is too incremental. I am keeping the rating in this case.
Summary: This paper studies the problem of minimising Borda regret for dueling bandits in the generalised linear setting where each pair of arms has a context vector associated with it. The paper considers both the stochastic setting where the parameter $\theta^*$ used for generating rewards is fixed and the adversarial setting where it is time dependent. **[Lower bound]** The authors show a hard problem instance for which any algorithm will incur a regret of $\Omega(d^{2/3}T^{2/3})$ in the stochastic setting, $d$ being the dimension of the context vectors. **[BETC-GLM for stochastic setting]** The authors propose an explore-then-commit style algorithm that first pulls arm pairs based on a G-optimal design for some rounds and then exploits by selecting the arm $\hat{i}$ with the highest MLE Borda score and pulling $(\hat{i}, \hat{i})$. This algorithm incurs $\tilde{O}(\kappa^{-1} d^{2/3} T^{2/3})$ regret, where $\kappa$ is a problem dependent parameter (which is constant for linear bandits). **[BEXP3]** The authors then propose an EXP style algorithm for the adversarial case. The algorithm is restricted to the linear setting. At each step, it estimates $\hat{\theta}_t$ using the reward obtained in that step, and uses it to estimate the Borda scores of all arms at that step. These scores are used to compute a distribution over arm pairs in the same spirit at EXP. This algorithm incurs a regret of $O((d \log K)^{1/3} T^{2/3})$, which matches the lower bound when $K = O(2^d)$. Experiments on real and synthetic data corroborate the theoretical findings. Strengths: Originality - The Borda regret minimisation problem for generalised linear dueling bandits is new. The authors seem to have adequately cited the related work. Quality - I have not checked the details in the appendix but the arguments presented in the main paper are inspired from well-known techniques, and appear sound to me. Barring a few issues listed in the weakness section, I believe that the claims are well supported, both theoretically and empirically via experiments on real and synthetic data. Clarity - The paper is very well written and easy to understand. In particular, I appreciate the explanation for ``Borda reduction'' and why it is not sufficient in Section 3.1. Significance - The results fill an important research gap. Weaknesses: Originality - The algorithms and their analysis have limited novelty. While I understand why Borda reduction does not trivially work, it would be helpful to have a summary of the technical challenges encountered in analysing the algorithms, while highlighting the new ideas that were employed. Quality - BEXP3 assumes a linear model for $p_{i,j}^t$ and not a generalised linear model. I think this should be clarified in the abstract. It would also be useful to have a discussion on what makes having a link function here hard. Clarity - Just two minor comments, 1. Use \citep instead of \citet wherever appropriate (e.g., L19-20) 2. Explaining the utility of using a G-optimal design in Algorithm 1 (to have a uniformly good estimate of $\theta$) will improve the readability of the paper. Significance - The authors note in their conclusion that their exploration scheme guarantees accurate estimation in all directions, thereby paving way for extensions like top-k recovery and ranking problems. However, the exploration scheme is the G-optimal design, which is not a new contribution. Outside of this, the work seems to have limited impact. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please address the points under originality and significance in the weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: I suggest adding some discussion addressing my concern under "Quality" in the weakness section as a limitation of the analysis for BEXP3 (if applicable, I may be wrong in which case please correct me). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We appreciate that you recognize our work as “filling an important research gap”. We address your comments and questions as follows: --- **Q1**: The algorithms and their analysis have limited novelty. While I understand why Borda reduction does not trivially work, it would be helpful to have a summary of the technical challenges encountered in analyzing the algorithms while highlighting the new ideas that were employed. \ **A1**: Thank you for your suggestion. We summarize the challenges and new ideas below. + *Challenge (stochastic setting)*: Identify an effective policy to choose the pairs for comparison. \ *Solution*: To estimate the Borda score, the most sample-efficient way is to query each pair uniformly. Under the contextual linear setting, this means we need to explore each direction in $\mathbb{R}^d$ uniformly well, leading to the G-optimal design-based exploration. Meanwhile, our hard instance construction illustrates that exploration won’t help exploitation (Line 218-224). Therefore, it won’t hurt to separate exploration and exploitation. This observation leads to our simple yet efficient explore-then-commit algorithm. + *Challenge (adversarial setting)*: When applying the EXP3 framework, we have to bound the change in policy $\tilde{q}\_{t}$ at each round (Line 679), which requires an upper bound on $\hat{B}\_t(i)$. \ *Solution*: To address this, we show in Lemma 17 that the Borda score is related to the inverse matrix norm $\max\_{i,j} \\| \phi\_{i,j} \\|^2\_{Q\_t}$ and further bound it by the minimum eigenvalue $\lambda\_0$ of the uniform matrix as in Assumption 1. + *Challenge (hard instance)*: Construct the hard instance for arbitrary $d$ dimension, and prove the lower bound. \ *Solution*: The construction is based on the hardness of identifying the best arm in the $d$-dimensional linear bandit setting. To prove the lower bound, we first apply a reduction step to restrict the choice of $i_t$. Then we bound from below the regret by the expected number of sub-optimal arm pulls. The proof idea is new compared with previous works. --- **Q2**: BEXP3 assumes a linear model for p_{i,j} and not a generalized linear model. I think this should be clarified in the abstract. It would also be useful to have a discussion on what makes having a link function here hard. \ **A2**: Thank you for your suggestion. We will make it clear that BEXP3 only works for linear models in the abstract. \ Similar to the reason why EXP3-type algorithms for the adversarial generalized linear bandit are missing, the main difficulty of adding a link function to our adversarial setting is that, typically, EXP3 requires an unbiased one-sample estimator for the parameter (Line 6). For non-linear link functions, the widely used maximum-likelihood estimator is not unbiased, leading to the estimated Borda score being inaccurate. The bias cannot be sufficiently bounded with only one sample each round, and thus the regret becomes uncontrollable. This is the main difficulty why EXP3 cannot work well for generalized linear models, both for standard bandits and dueling bandits. We will add this to the discussion of the limitations of our algorithm. --- **Q3**: …However, the exploration scheme is the G-optimal design, which is not a new contribution. Outside of this, the work seems to have limited impact.\ **A3**: We do not intend to claim the G-optimal design as our contribution. As we explained in A1, our contributions include the new problem setting, the hard instance, the algorithms, and the upper bounds. \ Besides, one of the key takeaways from our paper is that Borda regret minimization is intrinsically hard, no matter if the environment is stationary or adversarial. Centered around this key observation, we derived the lower bound and two matching upper bounds. We believe this work is noteworthy to the research community especially when learning from human feedback (e.g., preferential feedback) has received increasing attention these days. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: Thank you for taking time to respond to my questions. I had concerns about the novelty of this paper. In particular, the reward structure allows the authors to heavily borrow techniques from the generalised linear bandits (or adversarial linear bandits literature). For example, I considered the idea of needing a good estimate of $\theta$ in all directions, and using a G-optimal design to get it, to be fairly natural. Perhaps I was blinded by hindsight. Thank you for highlighting the challenges again. I have increased my score to 6. I hope that the authors will incorporate other suggestions from my comments in the next version of the manuscript. All the best and have a good day :) --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thank you again for your feedback on our paper. We really appreciate your increased score, and we're committed to incorporating your suggestions into the revised version.
Summary: This paper discusses Borda regret minimization in a contextual dueling bandits scenario, where the context is given in a generalized linear form. It provides a worst-case lower bound for the stochastic and adversarial learning scenario. The authors develop for both scenarios algorithmic solutions whose asymptotic Borda regret matches these lower bounds (up to logarithmic terms). These solutions are shown to empirically outperform current state-of-the art solutions in experiments. Strengths: The paper tackles an interesting problem. It contributes a novel lower bound for the particular learning scenario and develops algorithmic solutions, which (a) come with log-optimal regret upper bounds and (b) outperform current methods on both synthetic and real-world data. In my opinion, this paper is well-written, the notation is convenient and the algorithms and proofs seem to be presented in a reader-friendly way. Weaknesses: I did not find any weaknesses while skimming the paper and the proofs. While reading, I've collected some the following typos/minor suggestions: - 170f.: $\lambda_{min}$ has not formally been introduced. - 216: Refer here to the appendix for the proof of Thm. 4. Or did you mention before that proofs are to be found in the appendix? - 240f.: Here, you call the G-optimal design $\pi^\ast$. For consistency, denote it also like this in Alg. 1 and the further discussion? - 297: Refer to Thm.4 for convenience? - 309: "we study" - 331: [the] EventTime dataset - 338: for [the/a] linear setting - 457: be satisfied - 522f.: \P_{\theta,\A} has not been defined - 522f.: C=10? - 529f.: Should averaging be done over $\theta \in \{-\Delta,\Delta\}$? Also, in 532ff. etc. - 532f.: Why is "sign" in bold? - 552f.: Regarding your identity of $V_\tau$ here, you seem to have switched from column- to rows for $\theta_{i,j}$ in comparison to l. 171f. - 568f.: For readability, formally introduce $\succeq$ - 582: $|\mathrm{supp}(\pi)|$ Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Does the hard instance in Remark 11 fulfill all of assumptions 1-3? (If trivial, at least mention it briefly for convenience.) If not, could you provide another hard instance fulfilling these, which leads to a similar bound? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors adequately addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback. We address your comments and questions as follows: --- **Q1**: While reading, I've collected the following typos/minor suggestions: … \ **A1**: Thank you so much for the suggestions! We will fix them accordingly. --- **Q2**: Does the hard instance in Remark 11 fulfill all of assumptions 1-3? (If trivial, at least mention it briefly for convenience.) If not, could you provide another hard instance fulfilling these, which leads to a similar bound? \ **A2**: The hard instance in Remark 11 satisfies all of Assumptions 1-3. The linear link function $F(x) = \frac{1+x}{2}$ automatically satisfies Assumptions 2 and 3. For Assumption 1, we have $\lambda_0 = 1/4$ so Assumption 1 holds too. --- Rebuttal Comment 1.1: Comment: Thanks for your response, I keep the score.
Summary: In dueling bandits, each pair of arms corresponds to some unknown probability $p_{i,j}$ where $p_{i,j}$ is the probability arm i is ranked higher than $j$. The learner sequentially chooses pairs of arms and receives a noisy result as to the ordering of the pair, i.e. Bernoulli$(p_{i,j})$. This paper considers dueling bandits under a generalised linear model where each pair of arms $i,j$ corresponds to a known context vector. The probability that $i$ is ranked higher $j$ is then the product of this context vector with some unknown vector $\theta^*$, passed through a link function. The goal of the learner is to pull arms with high ranking as much as possible, the regret on a single pair of arms is given as the average Borda score between the arms, and the learner aims to minimise cumulative regret. This setting has been considered previously, however, under the assumption that a hidden coherent ordering is forced on the arms. The key contribution of this paper is to relax this assumption and have essentially no constraint on the $p_{i,j}$s. In addition to the above setting, the authors consider a adversarial setting where the unknown vector $\theta^*$ is allowed to change round by round. Adversarial dueling bandits has been studied in the literature, the main contribution of this section is to extend this analysis to the contextual setting. Strengths: The paper is well written and easy to read. Sufficient treatment is given to previous works and care is taken to illustrate how the results of this paper are novel in comparison. In section 4 the class of hard instances is clearly constructed and the following discussion gives good intuition as to why explore then commit algorithms can work in this setting, which is in itself an interesting phenomenon. The results give a complete treatment of the setting, with matching upper and lower bounds, up to log terms. In their experiments the authors consider a variety of benchmarks, with application to synthetic and real world data. Weaknesses: I do not see why $\delta$ is given as input to the algorithm, it is not taken as a parameter but rather passed to the exploration phase $\tau$. The discussion in section A.1 of the appendix is vital to understanding the novelty of this work in comparison to Saha 2021, it is a shame that this section is not present in the main text. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Have the authors considered relaxed assumptions, that do not require a coherent ordering of the arms, but ensure that explore then commit algorithms cannot be optimal? For instance, a arm cannot be ranked higher than another arm whose Borda score is sufficiently higher than its own, according to some tolerance. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: For the BETC-GLM algorithm, the authors acknowledge the potential limitation of not having access to the exact G optimal design. They suggest a well known sub routine to estimate the G optimal design and describe the additional error term that would incurred by this. The authors discuss and provide compelling arguments as to whether the assumptions are reasonable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback and strong support. We address your comments and questions as follows: --- **Q1**: I do not see why $\delta$ is given as input to the algorithm, it is not taken as a parameter but rather passed to the exploration phase $\tau$. \ **A1**: Thanks for pointing this out. Indeed, $\delta$ can be removed from the input of Algorithm 1 as it only appears in $\tau$. We will fix this. --- **Q2**: The discussion in Section A.1 of the appendix is vital to understanding the novelty of this work in comparison to Saha 2021, it is a shame that this section is not present in the main text. \ **A2**: Thank you for the suggestion. Due to the space limit, we chose to leave this section in the appendix. We will re-arrange the content to fit section A.1 in the main text as per your suggestion. --- **Q3**: Have the authors considered relaxed assumptions, that do not require a coherent ordering of the arms, but ensure that explore then commit algorithms cannot be optimal? For instance, an arm cannot be ranked higher than another arm whose Borda score is sufficiently higher than its own, according to some tolerance. \ **A3**: We would like to first clarify that our work does not assume a coherent ordering/ranking of the arms. We rephrase your question as follows: If there are more structures in the problem but still there is no coherent ranking, will the ETC algorithm become not optimal? \ ***Our answer***: In general, we are not sure. But for the example you mentioned: if $B(i) - B(j) > \Delta$ where $\Delta$ is certain tolerance, then $i$ is always preferred over $j$ (i.e., $p_{ij} = 1$), it appears that a slightly modified ETC algorithm can still be optimal. Here is the argument: under your proposed assumption, if some arm has a low Borda score, its probability against all high-Borda-score arms is always $0$, so stopping exploring it early won’t change the Borda score gap among those high-Borda-score arms. Essentially, the regret is determined by those $\Delta$-near-optimal arms, instead of all arms. On the other hand, within the $\Delta$ radius, our lower bound construction will still hold by rescaling $\langle \phi_{i,j}, \theta^* \rangle$. Therefore, the order of the lower bound won’t change and thus ETC can still be optimal. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed answer to my question, I agree, finding such a setting where an ETC approach is not optimal is not obvious. Instead of a fixed tolerance $\Delta$, perhaps something like the constraint $p_{i,j} \geq exp(-\alpha\Delta_{i,j})$ where $\Delta_{i,j}$ is the gap between Borda scores of $i$ and $j$ and $\alpha>0$? After reading the other reviews I am inclined to agree that the contribution of this work is limited by the lack of novelty in the algorithms, however, providing the lower bound and showing an explore then commit approach can be optimal in this setting, is a nice result in itself. I have slightly lowered my score but still recommend the paper to be accepted.
Rebuttal 1: Rebuttal: Dear reviewers, Based on the feedback of Reviewer ECQz, we conducted an additional experiment to examine the performance of BEXP3. Please find the figure and description we provided in the uploaded PDF file. In this experiment, the number of items is $K = 64$, and the feature dimension is $d = 5$. The environment is adversarial and will alter its parameter $\theta^*_t$ (defined in Sec. 3) every 100 steps to make the algorithm’s chosen arms have the worst Borda score, introducing the largest one-step Borda regret. More specifically, we set $\theta^*\_t = \arg \min\_{\theta} B\_{\theta}(i_t)+B\_{\theta}(j_t)$. The whole simulation takes 100,000 steps, and we report the average cumulative regret over 100 runs with shaded areas near each line indicating the standard deviation. From the figure we uploaded, we can see that a non-adversarial algorithm (BETC-GLM) quickly suffers from a linear regret in the commit phase because the adversarial environment makes the committed arm the worst. DEXP3 and BEXP3 both can adapt to the adversarial environment, but our algorithm BEXP3 can also take advantage of the linear features, and thus incurs a smaller regret than DEXP3. Pdf: /pdf/16de60f09c2625417ef70cdaedfa8f3810c5cb40.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society
Accept (poster)
Summary: This paper introduces CAMEL, a role-playing framework involving two LLMs (an AI user and an AI assistant) communicating with each other to finish a specific task prompted by a human. The two agents are supposed to give instructions and provide answers respectively. On complex tasks (including AI society, code, math, and science), the proposed framework achieves better performance than a GPT-3.5-turbo single-shot baseline evaluated by both human and GPT-4. Further analysis involving fine-tuning a LLaMA model suggest the challenges and emerging behaviors in multi-agent multi-turn conversations. Strengths: 1. This paper introduces an interesting framework role-play that instead of requiring delicate human prompting, two agents are auto-prompted each other (Inception prompting) to solve a task by collaboration in a multi-turn conversation. This may add values to the research community studying complicated prompting methods to solve complex tasks. 2. Evaluation and analysis indicate challenges in multi-agent collaboration such as role flipping and conversation deviation. This can be interesting to future research involving multiple language models. Weaknesses: 1. Baseline and evaluation. The proposed method, although sounds promising and conceptually novel, is not very different from previous methods such as chain-of-thought reasoning, especially React and self-critic, where at each turn, new instructions are prompted and can be considered as "AI user" in this context. The main difference if whether one language model is employed, or two agents in a self-play setup. I agree that there may be some values in using multiple agents as shown in recent works, but using gpt-3.5 with single-shot prompting as the only baseline is not convincing. More importantly, it is not clear what prompts are used for the single-shot baseline and how the prompts are constructed. Furthermore, despite the explanation on why GPT-4 is used to summarize CAMEL before evaluation, this evaluation setting is not convincing because 1. there is no analysis on how much "hallucination" or "error propagation" is generated because of using GPT-4. In other words, the summarization may be biased by GPT-4 sampling results rather than from CAMEL itself. 2. GPT-4 is used as the evaluator (this is less an issue). 2. Many details are missing. For example, where the data and tasks are sampled from to construct the dataset and why they are used for evaluation. I would suggest the authors to specify corresponding analysis instead of "is available in the Appendix" when revising the paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Is there an inconsistency in naming? I think the proposed method is named as "role-playing" but is also referred to as "CAMEL". 2. In line 277, how do the assistant and the user know that they are stuck in a loop but are unable to break out? 3. Can you clarify the questions raised above? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have addressed the limitations and potential societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer Z1bn, Thank you for your careful review and valuable feedback. We appreciate your recognition of the novelty of our proposed framework and your insightful observations. **Responses to Strengths:** We're pleased to note your agreement on the potential of our inception prompting method and the multi-agent collaboration's challenges. Weaknesses: 1. **Baseline and evaluation.** **Response:** We respectfully disagree that the proposed method is not very different from previous methods CoT reasoning, React, and self-critic. CAMEL proposes a multi-agent framework that is very different from single agents approaches such as React and self-critic. In this paper, we showcase realizing multi-agent collaboration for task-solving. We agree that React and self-critic can be also used for task-solving. However, the main difference is that our multi-agent framework provides a broader way to create agents for task-solving. For instance, these agents can use different LLM models and tools. They can have different memory or context windows which is not possible to model with single-agent methods. Moreover, our method can be easily extended to other use cases beyond collaboration such as negotiation, competition, and so on. For games with incomplete information, it is necessary to model using a multi-agent framework. We understand your concern regarding the choice of baselines and the evaluation method. We agree that a comparison with methods such as CoT, React, and self-critic. In our revision, we aim to add these additional baselines. Here, we present additional comparisons only with Zero-CoT in the table below. | | **Draw** | **Zero-CoT Wins** | **CAMEL Agent Wins** | |------------------------|----------|-------------------|----------------------| | **GPT4 Evaluation** | 4% | 28% | **68%** | For the single-shot baseline, we used the specified task instruction and default system prompt as the prompts. We acknowledge that this might have been unclear in the paper and will make sure to clarify it in the revision. Regarding the use of GPT-4, we understand why the points were brought up. For the GPT-4 summarization, the agent is asked to extract full and complete solutions by looking at the conversation between a user and an assistant with particular specializations. The final solution should be purely based on the conversation. Since our solution generation uses gpt-3.5-turbo which had a 4K token limit (at the time), we had no option but to use GPT-4, since it has a larger token limit (8K), to extract the conversational solutions generated by the CAMEL agents to summarized ones, which aims at making the judge (GPT-4 or human) unable to tell the differences between the solutions by identifying the format of the solution. But we agree that this might not be ideal and will consider alternatives. We will include an analysis of this in the revision. **2. Many details are missing.** **Response:** We apologize for any lack of detail in the main text. We make a further clarification here: **Figure 3: Data Generation Prompts** is the prompt for generating roles and tasks for the AI society dataset. **D Data Generation Prompts for Code** is the prompts for generating languages, domains, and tasks for the Code dataset. **Appendix E Meta Data** is the metadata generated by LLMs for AI Society and Code datasets. 50 assistant roles and 50 user roles are generated for AI Society. 20 programming languages and 50 domains are generated for Code. **F Math and Science Datasets Generation Details**: The Math dataset consists of 50K problem-solution pairs. The Science dataset consists of 20K Physics problem-solution pairs, 20K Biology problem-solution pairs, and 20K Chemistry problem-solution pairs. This information will be incorporated more explicitly in the paper’s body rather than referring to the appendix. **Questions:** 1. **Inconsistency in naming?** **Response:** We would like to clarify that CAMEL refers to our overarching framework, which incorporates several techniques including role-playing, inception prompting, and the generation of multi-turn instructional datasets for fine-tuning. Among these, role-playing is a specific technique we propose within CAMEL to facilitate autonomous collaboration between the AI assistant and AI user. We apologize for any confusion caused by our initial wording and will ensure this distinction is made clear in our revised manuscript. 2. **... Stuck in a loop ...?** **Response:** An example is shown in **Appendix G Challenges - Figure 8: Challenges in Role-Playing - Infinite Conversation**: > Assistant: \ > It seems like you have already received the necessary information to develop the system. Is there anything else you need assistance with? \ > \ > User: \ > No, thank you for your help. ... \ > \ > ... \ > Assistant: \ > Goodbye! \ > \ > User: \ > Goodbye! \ > ... \ > Assistant: \ > It seems like we are **stuck in a loop**. Is there anything else I can help? \ > \ > User: \ > No, thank you. Have a great day! \ > ... The assistant and the user indeed can detect they're stuck in a loop when their responses become repetitive. However, knowing that they are stuck in the loop does not give them access to terminate the program since they are still in their role-play conversations. But this is important for designing the termination conditions. We automatically terminate the loop by checking the frequency of a list of words like termination token *"<CAMEL_TASK_Done>"*, *“thank you”* and *“you’re welcome”*, etc. We will make sure to articulate this aspect more clearly in our revised manuscript. 3. **Clarify the questions raised above?** **Response:** We hope our responses have clarified the issues raised. Once again, thank you for your constructive feedback. --- Rebuttal Comment 1.1: Comment: Thanks for the response. Regarding baseline and evaluation. I understand that CAMEL is a different framework from React and self-critic. I was mostly pointing out that those methods can be considered as prompting a single language model, whereas CAMEL uses two language models. Therefore, it is necessary to show some comparison in both methods, results, and pros and cons of each, to illustrate why CAMEL is better. Thanks for the updated results on zero-CoT, but the comparison does not seem to be fair (because there is no prompting for CoT). Methods like React are more comparable. I am thus still not convinced how much CAMEL is better than other methods (conceptually it does have benefits though). Furthermore, I agree that using GPT-4 as an evaluation metric is adapted now, but again using GPT-4 to summarize the results before doing evaluation greatly complicates the evaluation. --- Reply to Comment 1.1.1: Title: Response to comparison between CAMEL and React Comment: Dear Reviewer Z1bn, Thank you for your detailed feedback, particularly your observations regarding React and its comparability with CAMEL. To address your concerns: **Differences between React and CAMEL:** React operates on an "act and environment" paradigm, where it requires specific actions to be taken in a particular environment. In contrast, the primary experiments with CAMEL do not involve such a setup. This foundational difference makes a direct apples-to-apples comparison challenging. While React provides an interface between a language model and an environment to simulate reactions, CAMEL focuses on collaborative interactions between two models without the necessity of an explicit environment. **Possible Integrations:** Indeed, CAMEL could conceptually be comprised of two React agents, signifying that the two methodologies can be seen as orthogonal rather than as direct competitors. CAMEL's approach can be interpreted as a high-level collaborative mechanism that could potentially use React-like structures as its constituents. **Evaluation and Comparison:** We acknowledge the need to provide clearer comparisons between CAMEL and other methodologies, like React, to elucidate the unique benefits and possible drawbacks of our approach. However, we are not sure how to set up a direct apples-to-apples comparison with React. If you have some specific suggestions in this regard, we will be happy to work towards providing a more thorough analysis. Your feedback underscores the importance of making these distinctions clear and motivates us to refine our paper to better articulate these points. We genuinely appreciate your insights and will strive to address them comprehensively in our revisions. Warm regards, NeurIPS 2023 Conference Submission3376 Authors
Summary: # Summary ## Motivation Completing tasks by human-in-the-loop is time-consuming. An alternative is to let autonomous agents cooperate to solve tasks. ## Approach This paper propose to let two LLM agents *role-play* a user and an assistant to solve tasks. Their data collection approach follows the following steps: 1. For each dataset, they generate a number of user roles, a number of assistant roles and a number of tasks of each combination. 2. For each user, assistant, task combination, they generate the conversation by prompting both roles to iteratively generating instruction and solutions, and doing critic-in-the-loop to improve generations at each step. They fine-tuned 7B LLaMa models on different combination of datasets. The comparison between models are judged by GPT-4, whose accuracy is validated by small sample human evaluation. ## Results First, CAMEL (the proposed role-play method between two LLMs) works better than gpt-3.5 as a single model. Second, in most cases each generated dataset brings improvement to finetuning. Third, on third-party code datasets HumanEval+, they show that the finetuned small models works better Vicuna. ## Contribution 1. This paper proposes a framework and prompt methods for solving tasks with multiple LLMs with different roles. 2. This paper generates conversation datasets in solving these tasks, which will be helpful for future research. 3. This paper fine-tuned smaller model and showcased the performance on an out-of-domain code generation dataset. Strengths: # Originality Using a team of LLMs or generally NNs to solve a task is an interesting problem, but understudied in practical applications. This paper is innovative in that the generated tasks are in general domain, but also related to the agent roles engaged in the tasks. # Quality and Clarity As LLMs become accessible, the methods in this can be almost replicated by putting the prompts into the playground. This paper is clearly written, although not self-contained due to the shortage of space, e.g. the description of critic-in-the-loop model is in the appendix, though very important. # Significance The results in this paper are significant, except for the unclear parts in experiments (see questions). Due to the unavailability of good metrics when comparing models for general tasks, this paper uses GPT-4 for evaluation. This practice is widely used and has various issues raised by recent papers. Validating the results by doing small sample human evaluation is a reasonable choice. Weaknesses: 1. The "large scale language model society" can be misleading. In the whole paper, the conversations are between two agents with different roles. Readers might expect more agents to participate in decision-making. 2. It is unclear why this approach is different from hierarchical decision-making with the user as the high-level planner, and assistant as the low-level executor. The planner gives high-level instructions, while the low-level executor generates solutions. It is true that there are not many papers on LLMs for hierarchical decision making, however, there is a recent one [1]. This is not essentially a reason to reject this paper, but the authors should make the connection clearer. [1] Hierarchical Prompting Assists Large Language Model on Web Navigation Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Section 5.1, Fig. 4. are the CAMEL agents here GPT-4 based or the fine-tuned model mentioned in 5.2? If it is GPT-4, would the comparison with get-3.5-turbo be unfair? 2. Section 5.3, on which datasets is the CAMEL-7B trained on? All datasets or just code? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 6oPa, Thank you for your thoughtful review and positive feedback on our paper. We're pleased to hear the originality and significance of our work is recognized. Below, we address your questions and concerns: **Strengths:** > 1. **Originality** - Using a team of LLMs or generally NNs to solve a task is an interesting problem, but understudied in practical applications. This paper is innovative in that the generated tasks are in the general domain, but also related to the agent roles engaged in the tasks. \ > 2. **Quality and Clarity** - As LLMs become accessible, the methods in this can be almost replicated by putting the prompts into the playground. This paper is clearly written, although not self-contained due to the shortage of space, e.g. the description of the critic-in-the-loop model is in the appendix, though very important. \ > 3. **Significance** - The results in this paper are significant, except for the unclear parts of the experiments (see questions). Due to the unavailability of good metrics when comparing models for general tasks, this paper uses GPT-4 for evaluation. This practice is widely used and has various issues raised by recent papers. Validating the results by doing a small sample human evaluation is a reasonable choice. **Response to Strengths:** We appreciate your acknowledgment of the originality, quality, clarity, and significance of our work. We recognize that the description of the critic-in-the-loop model was limited due to space constraints, and we will strive to provide more clarity within the main body of the paper. **Weaknesses:** > 1. The "large-scale language model society" can be misleading. In the whole paper, the conversations are between two agents with different roles. Readers might expect more agents to participate in decision-making. **Response:** We apologize if the term was misleading. Our focus was to demonstrate how a smaller group of agents with well-defined roles could collaboratively solve a task. These LLM agents play different roles that human society has. In the AI society dataset, we collect 50 * 50 pairs of agents with different society roles. In the appendix (P Critic-In-The-Loop), we also show a small society with three agents: a user agent as a Postdoc, an assistant agent as a Ph.D. student, and a critic agent as a Professor. We understand that readers might expect a larger society of agents, and we will work on clarifying our approach. > 2. It is unclear why this approach is different from hierarchical decision-making with the user as the high-level planner, and the assistant as the low-level executor. The planner gives high-level instructions, while the low-level executor generates solutions. It is true that there are not many papers on LLMs for hierarchical decision-making, however, there is a recent one [1]. This is not essentially a reason to reject this paper, but the authors should make the connection clearer. \ > [1] Hierarchical Prompting Assists Large Language Model on Web Navigation **Response:** We agree with your comparison of our approach to hierarchical decision-making. Thanks for pointing out the missing reference [1]. This paper is indeed relevant but it came out one week after the NeurIPS submission deadline. So we were not able to cover it. The suggested paper [1] proposes a novel hierarchical prompting Actor-Summarizer-Hierarchical (ASH) for web navigation. The action is generated based on a summarized observation by a summarizer instead of the raw observation. The hierarchical modularized design reduces the heavy reasoning burden and improves performance significantly on Webshop tasks [2]. We agree there are some similarities in terms of reducing the difficulty of reasoning complex tasks in both methods. Our intention was to highlight how specific role-playing with multiple agents can lead to a more cooperative solution. We will strengthen this connection by discussing the similarities and differences with the referenced paper on hierarchical prompting and explaining our unique contribution to the field. We also found a more recent (25 Jul 2023) follow-up work WebArena [3] on using autonomous agents for web navigation to be impressive. [2] Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan. 2022a. Webshop: Towards scalable real-world web interaction with grounded language agents. arXiv preprint arXiv:2207.01206. [3] Zhou, S., Xu, F.F., Zhu, H., Zhou, X., Lo, R., Sridhar, A., Cheng, X., Bisk, Y., Fried, D., Alon, U. and Neubig, G., 2023. WebArena: A Realistic Web Environment for Building Autonomous Agents. arXiv preprint arXiv:2307.13854. **Questions:** > 1. Section 5.1, Fig. 4 are the CAMEL agents here GPT-4 based or the fine-tuned model mentioned in 5.2? If it is GPT-4, would the comparison with get-3.5-turbo be unfair? **Response:** We apologize for the confusion. The CAMEL agents used in our experiment were based on gpt-3.5-turbo for role-playing (See line 288), not GPT-4. GPT-4 is used as a judge for the evaluation. We also use GPT-4, since it has a larger token limit, to extract the conversational solutions generated by the CAMEL agents to summarized ones, which is aiming at making the judge (GPT-4 or human) not able to tell the differences between the solutions by the formats. We will further clarify this aspect in our revision. > 2. Section 5.3, on which datasets are the CAMEL-7B trained? All datasets or just code? **Response:** CAMEL-7B was trained on all datasets (AI Society + Code + Math + Science), not just code, which is similar to how Vicuna was trained on diverse datasets that do not include only code. We will include clearer information about this in the revised paper. Once again, thank you for your insightful comments. They will be of great assistance as we refine our work in the final version. --- Rebuttal Comment 1.1: Title: Reply to authors Comment: Thanks authors for their detailed rebuttal. I think the response answers most questions that I had. I have also read the comments by other reviewers and still lean towards accepting this paper. Is it possible to change the "large-scale language model society" in the title? It is still confusing as whether the language models are large-scale or the society is large-scale. A candidate is "large language model collaboration". --- Reply to Comment 1.1.1: Comment: Dear Reviewer 6oPa, Thank you for your thoughtful feedback and for considering our detailed rebuttal. We're pleased to know that our responses addressed most of your concerns. In regard to your suggestion about the title, we understand your perspective on the ambiguity it presents. We agree that clarity in the title is crucial, and "large language model collaboration" does offer a more direct understanding of the content of the paper. We will ensure that the title is revised to better capture the essence of our work without causing any confusion. Once again, we appreciate your constructive feedback and your inclination toward accepting our paper. Your insights have been invaluable in guiding our revisions, and we are grateful for your continued support. Warm regards, NeurIPS 2023 Conference Submission3376 Authors
Summary: This paper attempts to address a dilemma in leveraging large-language models (LLMs) for solving complex tasks in a collaborative setting: the question of oft-needed human intervention in the equation. More specifically, the authors have come up with an intuitive and novel cooperative agent framework called role-playing that supports effective task completions by collaboration dialogues between agents (LLMs) without extensive human interventions except at inception. Their framework also offers a scalable way to investigate and refine the collaborative capacities of multi- agent systems and they provide detailed analysis and resolution strategies for challenges that come up in such a to- and-fro LLM-instruction scenario. They evaluate their agents exhaustively with state-of-the-art LLMs like GPT 4 as well as with human intelligence. Moreover, they demonstrate task-specific emergence capabilities of smaller-sized LLMs like Llama using their generated user-instruction datasets for various scenarios and domains. Contributions: Datasets/Libraries 1. Their publicly available library provides modular functionality and includes implementations of different agents, examples of well-crafted prompts, and data explorers. 2. Two large conversational, task-oriented, and instruction-following datasets: AI Society and Code. 3. Math and Science dataset (QA) and Misalignment dataset (contains simulations of potential risks of such an uncontrolled autonomous system) 4. These datasets will help investigate other larger language models, allowing such LLMs to communicate more effectively with human agents. Strengths: The proposed method (inception prompting) is intuitive, novel, and well-motivated for a novel collaborative task-solving using LLM-agents. The paper is mostly easy to follow. For instance, Fig. 2 is pretty detailed when it comes to roles and task assignments. The supplementary material, appendices, and the libraries provided can be crucial for future works in this direction. The methods section is scientifically sound with effective strategies being discussed for resolving the unaligned idea flows from a role perspective. The proposed framework can be used to evaluate collaborative problem-solving in crucial domains like classroom learning and education. Weaknesses: 1. Role/task alignment: For the specific task of building an app for stock trading via analyzing sentiments of certain stocks, it seems that the role assigned to the AI user (stock trader) and the message it generates seems counterintuitive since you wouldn’t generally expect stock traders to know environment variables being the first step towards solving this specific task. Instead, a role (like say tech lead/ tech supervisor) seems more fit for the AI user in that example. 2. Fig. 2 confusion: Fig.2 is a little confusing regarding which agent receives the task-specifier prompt (i.e., at the inception level). If I am reading correctly, the multi-agent scenario seems to include two LLMs, but it is not clear who received the starting ‘task-specifier’ prompt immediately. It’ll perhaps improve the readability if the authors can provide some clarity regarding this either in sec. 3.2 or Fig 2 3. Long-distance memory issues: in section 4.1, the authors provide a set of termination conditions that brute- force the collaborative mutual dialogues between the agents. Is it not entirely clear what exactly the authors mean when they claim 40 to be the max-limit of messages to ensure enough length in the conversation history. Was it chosen purely from a pricing perspective? I think the paper can benefit from having some more analysis from a cost/compute perspective. Also, did the authors come across any cases where the agents tended to effectively forget the previous thread of instruction flow (thus leading to sub-optimized task results) because of long-range memory issues that tend to affect such LLM-based agents? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: (Also suggestions) 1. Sec 3.2 makes note of cases of role-switching being observed between the user and the assistant. Are those cases of LLM hallucination anyway correlated with the framing of the prompt? Does it affect certain tasks more than others or is it observed usually at specific points in the sequence of collaborative dialogues? 2. The authors precisely point out challenges such as instruction repetition, flake replies, and infinite loops of messages with examples in the appendices. However, one way of improving the analysis of the paper would be to have a plot/diagram with more details about the distribution of such cases (frequency, places where they tend to occur in the sequence of dialogues, etc). Also, what methods were used to stop/terminate such cases of looping/role-flipping, was it human intervention or did the authors use flags similar to the task termination conditions? 3. On section 5.2, which explains the methods for evaluating ‘emergence’ capacities in smaller LLMs like Llama, what was the motivation behind the sequence of various domains (math, science etc) being a specific way and not any other? Moreover, Table 1 can also benefit from some clarification about what exactly Model 1 and Model 2 are. Although the authors have pointed to sec 5.1 for the evaluation in sec 5.2, it is unclear which one is Llama variant. Also, there might be a case for further experiments with other relatively smaller-sized instruction-based LLMs like T0 [3], FLAN [2] or InstructGPT [3] Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have explicitly mentioned the limitations and have provided reasonable strategies to avoid any harmful (pricing/social or otherwise) consequences of their work. However, the paper might benefit from a cost- compute analysis in generating the collaborative datasets using LLMs, perhaps as a paragraph in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer wYFm: **Response to Strengths:** We appreciate your positive comments on the novelty, motivation, and potential benefit for future work. **Weaknesses:** 1. **Response:** We acknowledge the reviewer's observation about the role of the AI user (stock trader). The decision to use the 'stock trader' role was based on the assumption that the AI user stock trader has domain knowledge about stock trading and some basic knowledge of Python. So that the stock trader can collaborate with a Python programmer to achieve the task with its domain knowledge. We see the merit of having the suggested role of 'tech lead/tech supervisor'. Inspired by your suggestion, we propose a more realistic and aligned workflow: Stage 1: A stock trader (AI User) collaborates with a tech lead (AI Assistant) to figure out a detailed implementation plan for developing a trading bot for the stock market. Stage 2: A tech Lead (AI User) then collaborates with a Python programmer (AI Assistant) on implementing the obtained plan. (See the PDF in the global response for the details) In this two-stage approach, we see the implementation is more systematic and well-rounded, which includes the designs of Data Storage, Risk Management, Security, and even Testing and Deployment. What would be even more interesting is to introduce an automatic role-generation agent that can generate roles for a given task. We will add the above discussion in the final manuscript. 2. **Response:** We will clarify this in section 3.2 and enhance Fig. 2 to better illustrate the inception prompts. As mentioned in 3.1 Role-playing Framework - Human Input and Task Specifying. The 'task-specifier' prompt is the system prompt for the task specifier agent which receives a preliminary idea and roles from Human input and produces a specified task. The task specifier agent acts as a brainstorming module to help with producing a specified task for the AI User and AI Assistant agents. 3. **Response:** Our decision to limit the number of messages to 40 was mainly cost-related. Even if we provide a set of termination conditions, we still want to put a safeguard to the max-limit of the message. It is because after the task is completed the agents will provide short outputs like "thank you" and "welcome". If no safeguard is set and termination fails, the conversation will only end until it exceeds the token limit, which may end up with thousands of API calls and hundreds of USD dollars cost. As for forgetting the previous thread of instruction flow, it is mainly related to agents asking for information that would not be able to perform since the lack of embodiment or physical information such as date, emails, files, location, etc. For instance, an AI user agent asks an AI assistant agent to book a meeting schedule in its calendar. However, the AI assistant agent does not ask for access to the AI user agent's calendar. Then the AI assistant agent will ask for the AI user agent's calendar access. However, we did not provide calendar API accesses to the AI user which cause the forgetting of the thread of instruction flow. This could be solved by providing API access to the embodiment or physical information. In the appendix (O Embodied Agent), we show an example that provides image generation API to an embodied agent which enables it to generate images. **Questions:** 1. **Response:** We indeed noticed that role-switching can sometimes be task-dependent. As mentioned above, it is mainly related to agents asking for information that would not be able to perform since the lack of embodiment or physical information such as date, emails, files, location, etc. (See the response to Weaknesses 3) 2. **Response:** A dataset analysis is provided in the appendix (Dataset Analysis) including some plots for the distribution of these cases. Please see Figures 9 and 10 for the analysis of the distribution of conversation termination reasons and Figure 11 for the flake message distribution. Yes, the checking is similar to the task termination conditions. Role flipping is checked when the assistant instructs the user with keywords like "Instruction" and looping is checked frequency of a list of words like “thank you” and “you’re welcome” etc. 3. **Response:** The sequence of domains was initially ordered by the time orders of the collection which is relatively arbitrary. We will clarify the details in Table 1. Model 1 and Model 2 are both LLaMA 7B-based models which are trained on different datasets. For instance, LLaMA-7B is the vanilla model, AI Society and AI Society + Code + Math + Science are models trained on the AI Society dataset and a combination of datasets (AI Society + Code + Math + Science) respectively. Moreover, we appreciate your suggestion of comparing with other LLMs like T0 [1], FLAN [2], and InstructGPT [3]. Below we provide results that show the emergence of knowledge of AI Society-related concepts of a FlanT5 Model. Fine-tuning a FlanT5 on our AI Society data improves the performance of the model on Society related tasks. Additionally, we compare FlanT5 fine-tuned on AI Society data with a LLaMA-7B fine-tuned on the same data and find that they achieve very similar scores with FlanT5 performing slightly better. | **Dataset** | **Model 1** | **Model 2** | **Draw** | **Model 1 Wins** | **Model 2 Wins** | |---|---|---|---|---|---| | **AI Society** | FlanT5 | FlanT5 (+AI Society) | 1 | 0 | **19** | | **AI Society** | FlanT5 (+AI Society) | LLaMA-7B (+AI Society) | 2 | **10** | 8 | **Limitations: Cost-compute analysis ...** **Response:** We'll add a paragraph addressing this in the revision. Generating all four datasets (AI Society, Code, Math, and Science) cost around 10,000 USD. Once again, sincerely thank you for your constructive feedback! --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I still recommend acceptance for this paper, but I would like to point out that the change to the described workflow could entail a significant rewrite so I would encourage you to make the revisions as surgically as possible. As my recommendation is based on the submitted paper, too significant an overhaul to the framing would invalidate reviews based on the original submission. --- Reply to Comment 1.1.1: Comment: Dear Reviewer wYFm, Thank you for your constructive feedback and your recommendation for acceptance. We deeply appreciate your recognition of our work. We understand and value your concern regarding the extent of revisions. Our intent is to address the comments you and other reviewers have raised without changing the fundamental framing of our paper. We assure you that the core content and essence of the paper will remain intact, and any alterations will be done with precision and caution. Your point about the balance between refining the paper and maintaining its original framing is well-taken. We will add a brief description of the workflow and ensure that our revisions are "surgical", targeting specific areas for improvement while retaining the primary structure and content that formed the basis of the original reviews. Once again, we appreciate your valuable insights and your recommendation. We are committed to ensuring that our paper maintains its integrity while addressing the concerns raised. Warm regards, NeurIPS 2023 Conference Submission3376 Authors
null
null
Rebuttal 1: Rebuttal: To all esteemed reviewers of our NeurIPS 2023 submission, We express our sincere gratitude for your thorough and insightful reviews of our manuscript. The detailed feedback from each of you provides us with clear guidance on how to refine and improve our work. **Reviewer wYFm:** Thank you for recognizing the novelty and intuitiveness of the proposed inception prompting method. Your positive feedback on the clarity of the paper and the potential of supplementary materials for future works is greatly appreciated. We acknowledge the concerns raised about the role/task alignment and the confusion surrounding Fig. 2. Your keen observations about long-distance memory issues with LLM-based agents are especially valuable and will be addressed in our revisions. We attach a **.pdf** file of a two-staged example that addresses the role/task alignment weakness based on your recommendation. The first stage is between a stock trader and a tech lead, and the second stage is between a tech lead and a Python programmer. > Stage 1: Tech Lead (AI Assistant) v.s. Stock Trader (AI User) \ > Task: Figure out an implementation plan for developing a trading bot for the stock market. > Stage 2: Python programmer (AI Assistant) v.s. Tech Lead (AI User) \ > Task: Develop a trading bot for the stock market. {Plan obtained from Stage 1}. We also added an example to show the emergence of knowledge of AI Society-related concepts of a FlanT5 Model. Fine-tuning a FlanT5 on our AI Society data improves the performance of the model on Society related tasks. Additionally, we compare FlanT5 fine-tuned on AI Society data with a LLaMA-7B fine-tuned on the same data and find that they achieve very similar scores with FlanT5 performing slightly better. | **Dataset** | **Model 1** | **Model 2** | **Draw** | **Model 1 Wins** | **Model 2 Wins** | |---|---|---|---|---|---| | **AI Society** | FlanT5 | FlanT5 (+AI Society) | 1 | 0 | **19** | | **AI Society** | FlanT5 (+AI Society) | LLaMA-7B (+AI Society) | 2 | **10** | 8 | **Reviewer 6oPa:** Your feedback regarding the originality of our approach and its potential significance in practical applications is very encouraging. We appreciate your recognition of the clarity of our paper and the potential replicability of the methods. We are attentive to your concerns about the term "large scale language model society" and the need for clarity in distinguishing our approach from hierarchical decision-making. We agree that the mentioned reference [1] and our proposed method have some similarities in terms of reducing the difficulty of reasoning complex tasks. Their connection and discussion will be added to our revisions. [1] Hierarchical Prompting Assists Large Language Model on Web Navigation **Reviewer Z1bn:** We are grateful for your acknowledgment of the innovative nature of our framework and its potential value to the research community. Your observations on the challenges of multi-agent collaboration and the potential issues in our evaluation methods are well-taken. We will certainly delve deeper into the points raised about baselines, evaluation, and the missing details in our paper. We understand your concern regarding the choice of baselines and the evaluation method. We agree that a comparison with methods such as CoT, React, and self-critic. In our revision, we aim to add these additional baselines. Here, we present additional comparisons only with Zero-CoT in the table below: | | **Draw** | **Zero-CoT Wins** | **CAMEL Agent Wins** | |------------------------|----------|-------------------|----------------------| | **GPT4 Evaluation** | 4% | 28% | **68%** | In conclusion, we are truly thankful for the time and effort each reviewer dedicated to evaluating our submission. Your feedback is instrumental in guiding us toward improving the quality and impact of our research. We look forward to addressing all raised concerns in our revised manuscript. Warm regards, NeurIPS 2023 Conference Submission3376 Authors Pdf: /pdf/77d367c2442ea4bb7a15a24e140e509410a59a4a.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
CoVR: Learning Composed Video Retrieval from Web Video Captions
Reject
Summary: This paper focuses on the task of Compositional Video Retrieval (CoVR) in which given a video and a text which modifies the video aims to rank and retrieve the modified video. In this work, a new large-scale dataset for pre-training (named WebVid-CoVR) is developed with generated modifications from a Large Language Model (LLM). Additionally, a smaller dataset for evaluation is manually annotated. Alongside this, a method is proposed that follows HH-NCE to learn the video compositions. Experiments show that the proposed method works well in both the supervised and zero-shot cases, and even translates well to Compositional Image Retrieval (CoIR) datasets CIRR and FashionIQ. Strengths: The created dataset(s) looks like it will be a useful addition to vision-language pre-training and especially for the Composed Video Retrieval task which currently doesn't have a dataset to train on. The methodology of creating the dataset is well-done and all the steps make sense, especially for the manually annotated portion of the test set. The paper is generally well-written and easy to follow. Weaknesses: Line 137: Is the Top K sampling referring to the tokens within a modification text? If not it's not clear what this means as only a single modification is generated as specified on Line 138. Regarding the rule-based ablation in Table 6, could the generated rules be paraphrased by a LLM? In this case, the MTG-LLM wouldn't necessarily need to be fine-tuned and a standard LLM could be used instead. Line 164: The paper mentions that WebVid-CoVR is noisy which is why a subset is manually annotated for the test set. What is the main source of noise within the main dataset? Is this from the generated modifications? The captions not matching the videos or something else? Has a human study shown how much of the dataset can be considered noisy from looking at a small number of samples? \alpha and \tau are learnable parameters? Normally, \tau at least, is a set hyperparameter. How these are learnt aren't mentioned within the text. Has any analysis been performed into the types of modifications within the dataset? I.e. noun/adjective/verb changes and others? It is mentioned within the limitations that certain modifications may not have been generated and it would be good to see what coverage there could be - even if this is from a human annotated subset. Technical Quality: 3 good Clarity: 3 good Questions for Authors: For more context regarding the questions, please see the weaknesses section above. 1. Has any analysis been performed into the types of modifications within the dataset? 2. What is the main source of noise within the main dataset? 3. How are \alpha and \tau learnt? 4. How does learning \tau compare to keeping it a fixed value as is normally done in practice? How does \tau change during training? 5. Could the generated rules be paraphrased by a LLM? Would this change the performance within Table 6? 6. Is the Top K sampling referring to the tokens within a modification text? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: There is a short section on limitations which mentions the scope of modifications within the dataset, I think this could be improved by performing some analysis on the generated prompts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. Has any analysis been performed into the types of modifications within the dataset?** We analyze the number of words in the modification text in Figure A.2 of the supplementary material, but in Figure R.3 of the rebuttal PDF we provide further analysis on the distribution of noun/adjective/verb as suggested by the reviewer. We also include a visualization of the verb-noun frequency heatmap in Figure R.4, which provides insights into the distribution of verb-noun count combinations across our dataset. We also conducted an analysis using part-of-speech tagging on the captions. The resulting visual representation, presented in Figure R.6 of the rebuttal PDF, illustrates the transition of POS tags across the difference words in Caption 1 and Caption 2. Finally, indeed, we mentioned within the limitations that certain modifications may not have been generated. We give examples for a couple such scenarios in our response to `Reviewer #4ap2` with the title *“4. Examples of other types of modifications not captured by single-word differences”*. --- --- **2. What is the main source of noise within the main dataset?** As mentioned in L178, about 23% of the automatic collection can be considered as noisy, because this was the percentage of discarded triplets when manually curating the WebVid-CoVR$_m$ test set. We expect a similar noise ratio in the training set. To address the reviewer’s question more in detail, we manually went over the triplet examples that were marked as unsuitable (therefore discarded) when annotating test set. We marked whether the reason for discarding falls within any of the following categories, and computed the following percentages (normalized by the number of discarded triplets). * 35%: The generated modification text does not describe the visual difference. Primarily attributed to either the quality of the video captions or the output generated by the MTG-LLM. * 28%: Paired videos are visually too similar. * 15%: Paired videos are visually too different. * 13%: At least one of the videos is difficult to understand/low quality. * 9%: Captions are too similar (e.g., one-word difference doesn’t change the meaning: “On the chairlift” and “Ride the chairlift”). While the first category of errors is the largest, it is important to also note that our strict standards for the test set necessitated the discarding of many triplets that could potentially be useful for training. --- --- **3. How are $\alpha$ and $\tau$ learnt?** We use $\tau$ in the same way as BLIP, i.e., as a learnable parameter clamped between 0.001 and 0.5 with initial value of 0.07. For $\alpha$, we set the initialize the learnable parameter with 0.1. We will mention this in the text. --- --- **4. How does learning $\tau$ compare to keeping it a fixed value as is normally done in practice? How does $\tau$ change during training?** It is worth noting that the adoption of learned temperature parameters is not unprecedented. Both the BLIP and CLIP models also incorporate a learned temperature parameter as part of their architectures. The initial value of $\tau$ is set to 0.07 and it decreases monotonically down to **0.06794**, i.e., in practice the value does not change much. We suspect that the scale of the parameter updates is not large enough. To mitigate the small updates, we also investigate learning $\tau / 100$ and then multiplying the learned value by 100 to have more granularity. If we do this, we initialize $\tau$ at 0.0007*100= 0.07 and it decreases until 0.0003835 * 100 = **0.03835** over training iterations and it stabilizes. We plot the curve in Figure R.5 of the rebuttal PDF, and report the WebVid-CovR results with/without learning $\tau$, as well as learning with this division trick. | Tau | R@1 | R@5 | R@10 | R@50 | |---|---|---|---|---| | Not learned | 54.70 | 81.07 | 88.42 | 98.11 | | Learned | 54.87 | 80.99 | 88.30 | 98.11 | | Learned (/100) | **55.15** | **81.27** | **89.28** | **98.28** | Table R.4 --- --- **5. Could we paraphrase the rule-based generated texts with a standard LLM to avoid finetuning MTG-LLM?** We thank the reviewer for the suggestion. To answer this question, we investigated the possibility of paraphrasing the generated rule-based modification texts using an LLM. Initially, we experimented with LLaMA and LLaMA 2 for paraphrasing, but the results were qualitatively unsatisfactory. For instance, the output generated by LLaMA was overly verbose and not suitable for the CoVR task. However, we found that employing gpt-3.5-turbo from OpenAI yielded significantly improved paraphrased responses. Paraphrasing the rule-base examples significantly boosts the results (from 43 to 53 R@1) at the cost of running an expensive LLM ($43 cost for this experiment for 1 paraphrasing per modification text on the entire dataset). On the other hand, our finetuning of the MTG-LLM, which is highly cost-effective (only 1 epoch and 715 text examples), leads to overall better results. | Type | R@1 | R@5 | R@10 | R@50 | |---|---|---|---|---| | Rule based | 43.00 | 70.10 | 79.38 | 94.58 | | Rule based paraphrased with GPT-3.5 | 53.45 | 79.64 | 87.19 | 97.70 | | Finetuned MTG-LLM with LLaMA | **54.87** | **80.99** | **88.30** | **98.11** | Table R.5 Given these observations, we believe that finetuning the MTG-LLM is a preferable approach, as it outperforms gpt-3.5-turbo in terms of cost-effectiveness. --- --- **6. Is the Top K sampling referring to the tokens within a modification text?** Yes, top-k sampling refers to the tokens within a modification text. As described and introduced in Fan et al. [2018], top-k sampling can generate more diverse outputs than beam search thanks to its randomness (e.g., “beam search produces common phrases and repetitive text from the training set”). However, the reviewer is correct that we only generate one modification text per caption pair in our study. We will clarify L136 by removing *“To increase the diversity of the generated samples”*. --- Rebuttal Comment 1.1: Title: Thank you for the clarifications Comment: Thank you for providing detailed responses to my initial review and addressing my questions. After reading these and the other reviewers' comments I am still in favour of accepting this paper.
Summary: The paper addresses the challenge of composed video retrieval, which involves querying a video and a modification text to find videos that exhibit similar visual characteristics with the desired modification. The main challenge is the lack of data for composed video retrieval. To overcome this, the paper proposes mining paired videos with similar captions from a large database and generating the corresponding modification text using a large language model. The paper explains the BLIP-based video and text encoder and the training process for the model using the collected data. Experimental results demonstrate that the model trained on the compiled dataset can generalize to both zero-shot and fine-tuning settings. Strengths: - The paper introduces a novel task of video retrieval and establishes a benchmark for future research in this area. - Overall, the paper is well-written, clear, and easy to follow. Weaknesses: - Examples in Figure 3 suggest that most samples do not require handling dynamic content, implying that there may not be a significant difference between the proposed CoVR task and existing CoIR tasks. - The MTG-LLM method requires manually created data for fine-tuning, which can be resource-intensive. - The training method is standard and not particularly innovative, although it is reasonable for the task. - The training data is limited to modifications that can be represented with single-word differences, potentially excluding other types of modifications. This point is mentioned as a limitation in the paper. By providing concrete examples of scenarios not addressed by this work, readers will understand the challenge clearly. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What annotations are added to the dataset? (line 145) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper mentions two limitations. - The data creation pipeline may not adequately capture some visible changes due to its design. - The generation of modification text may not be optimal as the text generation depends only on input captions. I appreciate the authors for pointing out the meaningful limitations and suggesting ideas for future research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. Dynamic vs static content in CoVR.** We acknowledge that, as the reviewer pointed out, some videos can be retrieved by only looking at one frame. This is a common problem in video datasets as highlighted by Jie et al. [“Revealing Single Frame Bias for Video-and-Language Learning”, ACL 2023]. However, our results in Table 2 show that using multiple frames benefits the CoVR performance. Here, we conduct a further analysis on the training set of WebVid-CoVR based on optical flow to detect the frequency of static videos in our data. We computed the optical flow using the Gunnar Farneback's algorithm and empirically chose a magnitude threshold of 1 to distinguish between videos with static and dynamic elements. The magnitude value is obtained by averaging the Euclidean norms of motion vectors in both horizontal and vertical directions across the computed video frames. We identified that around 25% of the triplets contain static target videos, which represents approximately 21% of the overall target videos. In the table below, we show a minor decrease in performance if we omit these static videos during training (while maintaining the same iteration count). This may be because image training data can still be complementary to video training [4]. In Figure R.2 of the rebuttal PDF, we also illustrate visual examples to motivate the use of multiple frames. We show two videos (one detected to be static based on optical flow, the other dynamic), where multiple frames are necessary to associate the visual content to the video caption (e.g., for the “timelapse” concept, and the action “hiding”). | | Percentage of data | R@1 | R@5 | R@10 | R@50 | |----------------|--------------------|-------|-------|-------|-------| | Static | 25 | 50.99 | 77.46 | 85.92 | 97.45 | | Dynamic | 75 | 54.11 | 80.46 | 86.95 | 97.82 | | Static+Dynamic | 100 | **54.87** | **80.99** | **88.30** | **98.11** | Table R.3: Training with static or dynamic partitions of WebVid-CoVR. --- --- **2. MTG-LLM requires manually created data for finetuning.** While it is true that our approach involves using 715 text triplets, it's important to note that we repurpose pre-existing data created by InstructPix2Pix in the context of text-conditioned image editing [8]. Furthermore, the data creation process for text triplets is quite fast considering it involves only 715 text samples. It is worth noting that, despite the manual data we used for the MTG-LLM, we provide two alternatives where no manually data is required: a rule-based approach in Table 6 and a prompting technique without finetuning in Table A.3. However, the rule-based and prompting methods exhibit lower results in comparison to MTG-LLM, highlighting the effectiveness of finetuning the MTG-LLM. We anticipate that future advancements in language models may enhance the effectiveness of the prompting technique further, removing the need to finetune. --- --- **3. Training method is standard.** We agree with the reviewer that the training method is standard, we do not claim novelty in this component, and on the contrary build our focus towards the data generation methodology. To reiterate, our contribution lies in the development of a scalable approach that enables the automatic generation of CoVR training data directly from Web video captions. This innovative approach not only facilitates efficient data collection but also addresses the challenge of acquiring diverse and relevant training samples. Additionally, we have introduced a new manually-curated benchmark to evaluate the CoVR task. --- --- **4. Examples of other types of modifications not captured by single-word differences.** As requested by the reviewer, we provide concrete examples of scenarios potentially excluded due to detecting one-word difference between caption pairs. * Multiple modifications: We cannot find examples where multiple aspects change at once. For example, the following example from the CIRR dataset is not captured, as two things are changed between query and target image: “The target photo is a close up of a similar dog, but it is swimming on its own with a tennis ball in its mouth.” * In the following example, the difference between the two captions is more than one word (“empty”, “and kids playing”), so we discard this pair. However the target could be formulated as “add kids playing”. * $caption_1$: An empty park with green trees. * $caption_2$: A park with green trees and kids playing. We would like to mention that, in preliminary analysis, we also explored pairing captions via text embedding similarity (instead of one-word difference), but our qualitative results showed that such similarity metric is too noisy to be used to detect caption pairs that differ by an easily describable modification. For example, some would require also checking the visuals to determine the modification (e.g., whether the park is empty in both videos in the above example). Instead, with our simple approach, we avoid this noise, and already obtain more than a million triplets. --- --- **5. What annotations are added to the dataset?** The full list of added examples can be seen in Table A.2 of the supplementary material. We will add a pointer from L145. --- Rebuttal Comment 1.1: Title: Thanks for your responses Comment: Thank you for your responses. The rebuttal addressed my concerns, and I remain positive about this paper.
Summary: This paper proposes a scalable approach to automatically generate composed visual retrieval training data. Specifically, based on the WebVid2M dataset, the authors generates a WebVid-CoVR training dataset with 1.6M CoVR triplets. Strengths: The data augmentation strategy is scalable. Weaknesses: The overhead for the dataset augmentation should be detailed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The overhead for the dataset augmentation should be detailed. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The overhead for the dataset augmentation should be detailed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **What is the dataset augmentation overhead?** We outline the detailed computation time for each step of the dataset generation. The computation times below are obtained using a *single* NVIDIA RTX A6000, but it is important to note that most of the processes can be parallelized, which would significantly reduce the wallclock time required. In practice, we used 2 GPUs. * **1. Text embedding extraction:** We extracted text embeddings from 2 million distinct captions out of a total of 2.4 million video-caption pairs. This process completed in less than **2 hours**. * **2. Caption similarity search:** To identify captions with one-word differences, we employed the faiss library [Johnson et al. 2019] to select the 100 closest captions, avoiding the need to compare each caption against the entire set of 2 million captions. This optimization significantly reduced the search time, resulting in **2.5 hours**. * **3. Text similarity filtering:** Thanks to the precomputed text embeddings, the text similarity filtering step incurred no additional time overhead. All the text filtering processes were completed in **less than 5 minutes**, even on a large pool of 1.2 million captions. * **4. Video similarity computation:** To filter by video similarity, we extracted the middle frame from approximately 135,000 videos and computed CLIP embeddings. This step takes approximately **3 hours**. * **5. MTG-LLM model finetuning:** Finetuning for 715 examples takes **less than 10 minutes**. Note that the time required to finetune the MTG-LLM model is independent of the number of CoVR triplets we generate. * **6. Modification text generation:** This is the most time-consuming stage of the pipeline. To optimize its speed, we modified the original ``Lightning-AI/lit-llama`` implementation on github to enable batch inference. This step takes around **24 hours** to process the 1.6 million caption pairs. This analysis demonstrates the feasibility and efficiency of our approach for the current dataset. We will add this breakdown to our supplementary material.
Summary: This paper automatically constructs a new dataset called WebVid-CoVR by applying a scalable automatic dataset creation procedure that generates triplets from video-caption pairs to a large-scale WebVid2M collection, resulting in 1.6M triplets. Moreover, this paper introduces a new benchmark for composed video retrieval(CoVR) and contribute a manually annotated evaluation set, along with baseline results. The results demonstrate that training a CoVR model on WebVid-CoVR transfers well to CoIR with competitive performance. Strengths: - The automatic triplet generation pipeline is carefully designed with many phases. - Provides strong baseline results on CoVR and shows transferability to CoIR. Weaknesses: - The automatic triplet generation pipeline seems to rely only on caption similarity, as well as MTG-LLM, which may introduce noise and ignore visual similarity. Visual similarity between videos should also be taken into account. - More dataset analysis, especially about the visual part, should be provided to gain more understanding of the characteristics of WebVid-CoVR. Besides, the human check may be necessary. - Only one model (CoVR-BLIP) is evaluated on the proposed CoVR task. More baselines like frozen/finetuned multimodal transformers could be added. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please refer to Weaknesses. Flag For Ethics Review: ['Ethics review needed: Compliance (e.g., GDPR, copyright, license, terms of use)'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. Incorporating visual similarity between videos.** We agree that considering visual similarity between videos is important for the triplet generation process. However, in this work, we indeed rely only on caption similarity because, as mentioned in L114, we rely on the assumption that caption similarity implies visual similarity. Another reason we chose not to rely on visual similarity was to avoid creating an undesired bias in the dataset. For example, if we restricted *all* the video pairs in triplets to have similarity within a range, this would potentially lead the model to learn to ignore the input modification text, and only focus on the visual query. On the other hand, we note that, we already incorporate visual similarity to some extent in some cases. As mentioned in L152, when multiple paired videos match the same video caption, to avoid over-representing certain modification texts, we select the top 10 pairs with the highest visual similarity. By doing so, we effectively reduce the number of possible triplets from 2.4M to 1.6M, discarding 800k triplets. Moreover, we performed an additional experiment. We train on WebVid-CoVR triplets whose video pairs fall above a certain visual similarity threshold. The table below summarizes the results of evaluating on the zero-shot CoIR benchmarks. We observe that the performance consistently decreases with increased threshold, while also filtering out a large portion of the training data. | Visual Sim. Threshold | Percentage of data | CIRR R@1 | FashionIQ R@10 mean | |---|---|---|---| | 0.00 (None) | 100% | **38.55** | **27.68** | | 0.55 | 92% | 38.07 | 26.81 | | 0.60 | 83% | 37.69 | 25.75 | | 0.65 | 71% | 36.96 | 25.77 | | 0.70 | 55% | 35.49 | 23.72 | Table R.1: We observe worse performance on CoIR zero-shot benchmarks as we increase the visual similarity threshold in our training data. We train each model for the same number of iterations. --- --- **2. More dataset analysis.** We provide dataset statistics in Section A of the supplementary material with (i) the text/video similarity of the caption/video pairs in Figure A.1, (ii) the histogram of the number of words in the generated modification text in Figure A.2, and (iii) the distribution of number of triplets per target video in Figure A.3. As per reviewer’s request, we further provide more analysis. We plot the distribution of video categories in Figure R.1 of the rebuttal PDF. These categories are found using the WebVid metadata provided by the recent *shinonomelab/cleanvid-15m_map* on HuggingFace Datasets. We find 50% of WebVid-CoVR videos in this metadata collection. Note more than one category can be associated with a single video (e.g., Nature and Animals/Wildlife for a video of a fish in the ocean). We further point to our response to `Reviewer #xJ3a` for more analysis on the types of modifications and our human-checked test set. --- --- **3. More baselines for the CoVR task.** As suggested by the reviewer, here, we include additional baselines: * LF-CLIP (MLP): We implement the architecture of Combiner [7], referred to as late fusion by CASE [28]. In detail, we concatenate the CLIP visual and text features (from the visual query and the modification text). The combined multimodal representation is then learned on WebVid-CoVR with an MLP initialized randomly. * LF-CLIP (avg): We also implement a simpler late fusion as a baseline for the case we *do not* train on WebVid-CoVR. We average the CLIP visual and text features as our combined query representation. * LF-BLIP (MLP): Similar to [28], we implement the BLIP equivalent of the above LF-CLIP (MLP) baseline, using BLIP visual and text features instead of the CLIP ones. Note that we throw away the cross-attention layers and instead use an MLP to combine modalities. * LF-BLIP (avg): Here, we simply average the BLIP visual and text features as above. * Variants of pretrained BLIP models: We experiment with various pretrained BLIP models from [31], including the base BLIP, BLIP finetuned on Flickr30k, and BLIP finetuned on COCO (what we used in the paper). Note that, in this case, we use the existing cross-attention layers of BLIP as our multimodal combined representation and finetune them with WebVid-CoVR. The table below summarizes the results. We observe that the inclusion of LF-CLIP (avg) and LF-BLIP (avg) baselines notably enhance the initial zero-shot performance in the paper. Additionally, we observe that the BLIP model finetuned on COCO has the highest performance. | Model | Train on WebVid-CoVR | R@1 | R@5 | R@10 | R@50 | |---|---|---|---|---|---| | LF-CLIP (avg) | No | 19.91 | 39.57 | 47.33 | 67.49 | | LF-BLIP (avg) | No | 46.84 | 72.09 | 80.79 | 93.97 | | LF-CLIP (MLP) | Yes | 41.26 | 69.83 | 79.93 | 94.95 | | LF-BLIP (MLP) | Yes | 52.01 | 76.40 | 84.32 | 96.14 | | BLIP base | Yes | 51.89 | 79.76 | 87.11 | 97.70 | | BLIP ft Flickr30k | Yes | 54.11 | 80.46 | 87.44 | 97.99 | | BLIP ft COCO | Yes | **54.87** | **80.99** | **88.30** | **98.11** | Table R.2: Additional baselines on WebVid-CoVR.
Rebuttal 1: Rebuttal: We thank all four reviewers (`#k4Sb`, `#xqD8`, `#4ap2`, `#xJ3a`) for constructive feedback. It is encouraging to see that our automatic triplet generation pipeline has been well-received, particularly for its careful design and multiple phases (`#k4Sb`, `#xJ3a`), as well as its scalability (`#xqD8`). Additionally, we are pleased that our Composed Video Retrieval (CoVR) task and dataset were recognized as having potential for future research (`#k4Sb`, `#xJ3a`), and that our experiments have showcased the adaptability of our approach to the CoIR task (`#k4Sb`), as indicated by the ability to generalize in both zero-shot and finetuning scenarios (`#4ap2`). We also appreciate the kind feedback on the readability and clarity of our paper (`#4ap2`, `#xJ3a`). We address each of their comments individually and will update the paper accordingly. Pdf: /pdf/3368e328f01382d2cfec8111e354dc1a8eb2924a.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
DFRD: Data-Free Robustness Distillation for Heterogeneous Federated Learning
Accept (poster)
Summary: The overall theoretical framework of the paper can be regarded as an extension of the FedFTG theory. Building upon the server-side learning of challenging samples, it incorporates the generation of data from the previous time step by the generator to prevent catastrophic forgetting in the global model. The experimental section of the paper is substantial, and it includes comparative analyses through ablation experiments to evaluate the effectiveness of various losses. Thus, the paper makes a good contribution. Strengths: Originality: In comparison to FedFTG, the originality of the paper lies in Section 3.2, where a diverse generator is utilized by leveraging the parameter updates from the previous time step to prevent catastrophic forgetting in the model. Quality: In comparison to FedFTG, the paper conducts thorough comparative experiments and ablative experiments that elucidate the roles of various losses, indicating a certain level of quality. Clarity: The framework diagram, Figure 1, in the paper provides a clear explanation of the algorithm's flow. Weaknesses: 3.1 The impact of knowledge transfer on synthesizing different datasets lacks theoretical support and experimental evidence, as demonstrated by the decision boundary theory presented in Figure 2. There is a lack of clear relationship between synthetic data and the decision boundaries of the student and teacher models. Since the Generator satisfies Equation 2, it is capable of generating samples (blue) on the decision boundary of the teacher model. However, it is not clearly explained under what circumstances it would generate samples beyond the decision boundaries (yellow and purple) without further elaboration. 4.3 The originality of the theoretical part of the paper is relatively weak, as using parameters from the previous time step to enhance samples appears more like a heuristic approach rather than a theoretical one. The algorithm/objective is not concrete. For Eq(5), the objective is to minimization for which parameters? Technical Quality: 3 good Clarity: 3 good Questions for Authors: How the experiments simulate the model heterogenous? Suggestions : The paper lacks visual experiments on samples to enhance the credibility of the theoretical training of the generator. Parameter omega be used both for generator and Dirichlet process. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We highly appreciate Reviewer deo9 for positive comments and precious feedback on our work. We respond to specific comments below. Q1: The impact of knowledge transfer on synthesizing different datasets lacks theoretical support and experimental evidence, as demonstrated by the decision boundary theory presented in Figure 2. There is a lack of clear relationship between synthetic data and the decision boundaries of the student and teacher models. Since the Generator satisfies Equation 2, it is capable of generating samples (blue) on the decision boundary of the teacher model. However, it is not clearly explained under what circumstances it would generate samples beyond the decision boundaries (yellow and purple) without further elaboration. R1: First, we would like to state that Figure 2 is provided as a conceptual illustration to give the reader a better understanding of our interpretations. And Figure 2 was carefully designed based on our findings and observations. In the empirical study, if the generator only satisfies Equation 2, it mainly generates synthetic data with red circles, thus making it difficult to transfer knowledge from teacher to student. Therefore, an adversarial manner is needed to generate synthetic samples with black circles, i.e., Transferability of the generator. Please see lines 128-180 and lines 211-215 for detail. Also, we clearly explain the case of generating synthetic samples with yellow and purple circles, please see lines 181-187. Q2: The algorithm/objective is not concrete. For Eq(5), the objective is to minimization for which parameters? R2: For Eq. (5), the objective is to minimize the parameters of the generator. We illustrate this in line 205 and Fig. 1. Q3: How the experiments simulate the model heterogenous? Suggestions : The paper lacks visual experiments on samples to enhance the credibility of the theoretical training of the generator. Parameter omega be used both for generator and Dirichlet process. R3: In the paper, we provide a detailed description of how to simulate heterogeneous models, see lines 257-259 and Appendix D. Additionally, we visualize synthetic samples of the generator in Figures 14-18 of Appendix. --- Rebuttal Comment 1.1: Title: Thank you for your response! Comment: You have addressed some of my inquiries. However, the theoretical aspects concerning the boundaries of data generation still require further refinement. --- Reply to Comment 1.1.1: Comment: We greatly appreciate the feedback from the reviewers and the valuable insights provided. Thank you for emphasizing the theoretical aspects concerning the boundaries of data generation. While we concur on its significance, our paper primarily showcases practical techniques to enhance federated learning with the help DFKD. It's worth noting that even in exist well-known efforts [48, 49, 53, 56], including those on handling data and model heterogeneity for FL with the help DFKD [48, 49], comprehensive theoretical analysis concerning the boundaries of data generation is often absent. Given the lack of suitable theoretical frameworks, we concentrated on robust empirical validation, showcasing our method (DFRD). Our results, we believe, robustly demonstrate our method's utility. We value your feedback and intend to delve deeper into theoretical aspects in future work. We aspire for our response to address the reviewer's concerns. We remain dedicated to addressing concerns you may possess with utmost eagerness.
Summary: The paper considers learning a robust global model in heterogeneous federated learning (FL). It aims to support scenarios of both data-heterogeneous, where data distributions among clients are non-IID (identical and independently distributed), and model-heterogeneous, where the model architecture among clients is different. To this end, the authors propose a data-free knowledge distillation (DFKD) method, which utilizes a conditional generator to generate synthetic data, which simulates the local models’ training space with respect to fidelity, transferability, and diversity. Besides, the authors utilize an exponential moving average method to mitigate the catastrophic forgetting of the global model. Experiments on real-world image classification datasets are conducted to evaluate the performance of the proposed approach. Strengths: 1. The problem of heterogeneous federated learning is interesting and important. 2. A data-free knowledge distillation method with a detailed analysis of the conditional generator w.r.t. three characteristics. 3. Experiments on six real-world datasets are conducted. Weaknesses: 1. The conditional generator in the proposed approach is not well motivated and elaborated. 2. The contribution of the proposed losses w.r.t. the three characteristics is incremental. 3. The choice of hyper-parameters in the exponential moving average method is unclear. W4. Some of the experimental results are hard to comprehend. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Q1. For the data heterogeneity considered in this work, what distribution the final test dataset follows? Is it the global distribution over all clients’ local data distributions? Q2. What is the purpose of training a generator on the server for the partial training-based methods? And how is it capture the training space of various clients’ local models? More elaboration would be helpful. Q3. Regarding the conditional generator proposed in this paper, the three characteristics (fidelity, transferability, and diversity) mainly follow those defined in [48] [49]. The contribution seems to only replace the generator with a conditional generator, which looks incremental. What is the new technical contribution in the proposed method? Q4. The EMA method is proposed to mitigate the catastrophic forgetting of the global model. However, since the global model already captures the knowledge of local historical models, it should not forget the knowledge if it is useful. I wonder why this problem exists. Is it related to how the clients extract the local models from the aggregated global model? Besides, how to set the averaging parameters \lambda and \alpha? Q5. It is mentioned in the paper that the generated synthetic data should be visually distinct from the real data for privacy protection. I wonder whether there is a metric to measure this. Also, if the objective is to capture the training space of local models, would it be possible to recover the clients’ local data distributions? Q6. The proposed DFKD method is claimed to be able to address the non-IID problem; then, it would be better to compare it with solutions such as FedProx, FedNova, etc. Q7. In Table 1 and Table 2, do the two rows of each method represent the local test accuracy and global test accuracy, or the opposite? The improvement seems to be very small compared to DENSE and FedFTG, for example, less than 1% on FMNIST. Q8. In Figure 3a, why the global accuracy of DENSE is too low? E.g., only 10% accuracy. Q9. In Tables 3-5, why the accuracies of the three datasets are too low? Are they well-trained or converged? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: L1. It is unclear whether the proposed approach can handle the non-IID setting well, as one of the goals is to support data-heterogeneous. The authors may need to compare DFKD with non-IID solutions when the model architecture among clients is the same. L2. The clients may not obtain useful local models due to they hold non-IID data distributions, and the server aggregates the sub-models over the global data distribution. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We highly appreciate Reviewer i49s for positive comments and precious feedback on our work. We respond to specific comments below. Q1: For the data heterogeneity considered in this work, what distribution the final test dataset follows? Is it the global distribution over all clients’ local data distributions? R1: We detailed the setup of the test dataset in the paper, see lines 263-269. Q2: What is the purpose of training a generator on the server for the partial training-based methods? And how is it capture the training space of various clients’ local models? R2: The purpose of training a generator on the server for the partial training-based methods is to adapt the FL system to model-heterogeneous scenarios, please see lines 41-53 and 105-119. In addition, we detail how the generator captures the training space of various clients’ local models in lines 128-209. Q3: Regarding the conditional generator proposed in this paper, the three characteristics (fidelity, transferability, and diversity) mainly follow those defined in [48] [49]. The contribution seems to only replace the generator with a conditional generator, which looks incremental. What is the new technical contribution in the proposed method? R3: We acknowledge that our study of generators follows the definitions in [48, 49] in terms of fidelity, transferability, and diversity, but they do not thoroughly study the training of the generator in these three characteristics. Therefore, we propose a more effective loss objective to address their shortcomings. Please see lines 128-209 and Experiments section for detail. Moreover, our goal is to guide the generator to better capture knowledge of the local models, which cannot simply be viewed as replacing the generator with a conditional generator, since a conditional generator also is used in [48]. Q4: I wonder why catastrophic forgetting problem exists. Is it related to how the clients extract the local models from the aggregated global model? Besides, how to set the averaging parameters $\lambda$ and $\alpha$? R4: We elaborated on why catastrophic forgetting exists, see lines 211-220. Furthermore, $\lambda$ and $\alpha$ are hyperparameters that require careful fine-tuning in the experiments. We empirically study their variations on SVHN and CIFAR-10, see lines 367-383. Q5: It is mentioned in the paper that the generated synthetic data should be visually distinct from the real data for privacy protection. I wonder whether there is a metric to measure this. Also, if the objective is to capture the training space of local models, would it be possible to recover the clients’ local data distributions? R5: To our knowledge, in the current research on federated learning and data-free knowledge distillation, there are no metrics for quantifying synthetic data quality. Therefore, existing works [49, 53] use visualization of the generator's output to discriminate differences visually. Also, the generator can only partially capture the local data distribution, which is impractical to recover the clients’ local data distributions [35, 36, 53, 56]. This is because the generator does not have access to real data, it can only approximate it with the help of teacher models. Q6: The proposed DFKD method is claimed to be able to address the non-IID problem; then, it would be better to compare it with solutions such as FedProx, FedNova, etc. R6: FedProx, FedNova and so on are regularization-based methods. To ensure fair comparisons, we omit the comparison with these methods. Notably, we chose FedFTG and DENSE as baselines since they are both DFKD methods and also address the non-IID problem (please see [48] and [49] for detail). Q7: In Table 1 and Table 2, do the two rows of each method represent the local test accuracy and global test accuracy, or the opposite? The improvement seems to be very small compared to DENSE and FedFTG, for example, less than 1% on FMNIST. R7: We elaborated on what each of the two rows represents in lines 263-269. Moreover, the less than 1% improvement of our method on FMNIST compared to DENSE and FedFTG is due to the simplicity of the classification task for FMNIST. Specifically, the reason why they perform well is that FedAvg can achieve good test accuracy on FMNIST, which provides excellent initialization of the global model in each round when DENSE, FedFTG and our method are used as fine-tuning methods. However, our method significantly outperforms DENSE and FedFTG on FMNIST when they act as data-free methods rather than fine-tuning methods, see Experimental Settings and Table 1 for detail. Q8: In Figure 3a, why the global accuracy of DENSE is too low? E.g., only 10% accuracy. R8: During our empirical study, we explored the reason for the low global accuracy when DENSE is used as a data-free method rather than a fine-tuning method. We argue that DENSE is very sensitive to the initialization of the global model. Good global model initialization can greatly improve the performance of DENSE. For example, FedAvg+DENSE can achieve good accuracies, see Table 1. Q9: In Tables 3-5, why the accuracies of the three datasets are too low? Are they well-trained or converged? R9: The low accuracy rates in Tables 3-5 are related to our experimental setup. We perform the ablation study in both data and model heterogeneous settings, please see lines 299-303 and the Experimental Setting section for detail. Also, they’re well-trained. Q10: It is unclear whether the proposed approach can handle the non-IID setting well, as one of the goals is to support data-heterogeneous. The authors may need to compare DFKD with non-IID solutions when the model architecture among clients is the same. R10: Our method proposes Dynamic Weighting and Label Sampling to handle the non-IID setup, see lines 231-236 and Appendix F for detail. --- Rebuttal Comment 1.1: Comment: Dear reviewer. I kindly inquire whether our response has successfully addressed your uncertainties. We remain dedicated to addressing concerns you may possess with utmost eagerness.
Summary: This paper studies how to learn a robust global model in the data-heterogeneous and model-heterogeneous FL, by proposing a data-free knowledge distillation method. The proposed method is evaluated on six image classification datasets and outperforms compared methods. Strengths: - This paper studies data heterogeneity and model heterogeneity in FL, which are relevant and important topics. - The motivation for solving both data and model heterogeneity are well explained. - The proposed method outperforms compared methods. - The authors conduct many ablation studies. Weaknesses: - It is not clear how to calculate the term \tau_{i,y}, which is important to adjust the synthetic data. - Only intuitive toy visualizations are shown. It lacks experimental visualizations to support the mentioned “Fidelity” of the proposed method. - The experiments only validate the model heterogeneity in terms of width. - The results presentation can be improved. The table is small, and some symbols are confusing. - The experiments are mainly conducted on datasets with small image sizes (e.g., 32*32 or 64*64), lack of validation on large datasets. - The global accuracy is sensitive to different values of \beta, and the standard deviation is large. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Why do DFRD and +DFRD show different trends in Table 1? For DFRD, L.acc is higher than G.acc, but for +DFRD, L.acc is lower than G.acc. - Is this method applicable to other model heterogeneity cases? Such as different network architecture? - Why replace FMNIST in table.1 with TinyImagnet in table.2? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors discussed the limitations with potential solutions, and the broader impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We highly appreciate Reviewer XBHz for positive comments and precious feedback on our work. We respond to specific comments below. Q1: It is not clear how to calculate the term $\tau_{i,y}$, which is important to adjust the synthetic data. R1: We delve into the calculation of $\tau_{i,y}$ in the paper, see lines 231-236 and Appendix F. Q2: Only intuitive toy visualizations are shown. It lacks experimental visualizations to support the mentioned “Fidelity” of the proposed method. R2: We visualize the synthetic data of generators with different diversity constraints over SVHN, CIFAR-10, and CIFAR-100 in Figures 14-18 in the Appendix, where Figure 14 is the visualization results of our proposed method. From Figure 14, we can observe that the synthetic data generated by the generators exhibits significant differences among the distinct classes, demonstrating the high fidelity of the proposed method. Q3: The experiments are mainly conducted on datasets with small image sizes (e.g., 3232 or 6464), lack of validation on large datasets. R3: We agree with this point. However, all our experiments are conducted on one NVIDIA Tesla A100 GPU with 80Gb of memory. We have tried datasets with image sizes of 128x128 and 256x256 in our experimental device, but running DFRD and baselines on these datasets would incur extreme time costs. So, we choose datasets with small image sizes for our experimental study. Q4: Why do DFRD and +DFRD show different trends in Table 1? For DFRD, L.acc is higher than G.acc, but for +DFRD, L.acc is lower than G.acc. R4: We explain the above phenomena in the paper, please see lines 271-281. Q5: Is this method applicable to other model heterogeneity cases? Such as different network architecture? R5: Our method can be applied to other cases of model heterogeneity, such as different network architectures. Q6: Why replace FMNIST in table.1 with TinyImagnet in table.2? R6: On the one hand, we want to study the performances of our method and baselines on datasets with more difficult classification tasks. On the other hand, Tables 1 and 2 are about data heterogeneity and model heterogeneity, respectively, so there is no need to force a fixed dataset. Another important reason is the space limitation of the main paper. Other suggestions provided by the reviewer to help improve the paper will be corrected in the revised paper. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response; some of my concerns have been addressed. However, the weaknesses of point 3 and point 6 have not been responded. I would like to keep my ratings unchanged. --- Reply to Comment 1.1.1: Comment: We greatly appreciate the feedback from the reviewers and the valuable insights provided. We thank the reviewer for raising the point 3 and point 6 on the weakness section. We noted them during the rebuttal. To address the reviewer's concerns, we provide a careful response for the point 3 and 6. For point 3, we acknowledge that we only validated model heterogeneity in terms of width. However, we do mention in our paper that our primary focus lies in addressing model heterogeneity through the PT-based method [28-31], which represents a significant category of methods to tackle model heterogeneity. Please refer to lines 42, 47-52, and 105-119 for more details. Indeed, there exist prior works [60] and [61] that study model heterogeneity in terms of depth, but inclusion of intricate heuristic operations restricts their applicability. Hence, we lean towards PT-based method. Furthermore, DENSE studies model heterogeneity scenarios wherein the architectures of local models are entirely dissimilar, and can only transfer knowledge from the local models to the randomly initialized global model, resulting in a degradation of the global model's performance, as illustrated in Table 1. For point 6, we thank for pointing out the details of Fig 4. From Fig. 4, we can see that DFRD maintains stable test performance among all selections of $\beta_{tran}$ and $\beta_{div}$ over SVHN. Meanwhile, $G.acc$ fluctuates slightly with the increases of $\beta_{tran}$ and $\beta_{div}$ on CIFAR-10. The above results indicate that DFRD is not sensitive to choices of $\beta_{tran}$ and $\beta_{div}$ over a wide range. Also, the large standard deviation is not due to $\beta_{tran}$ and $\beta_{div}$, but stems from our experimental setup. Specifically, to ensure reliability, we report the average for each experiment over $3$ different random seeds. Different seeds imply different heterogeneity (including data and model) and model initialization, so our experimental setups vary greatly among seeds, resulting in large standard deviations. Moreover, in our extensive comparison experiments, DFRD achieves the smallest standard deviation compared to baselines in most cases, which robustly demonstrates our method's utility. We aspire for our response to address the reviewer's concerns. We remain dedicated to addressing concerns you may possess with utmost eagerness.
Summary: In the presented paper, the authors lay out a strategy for fine-tuning a global model, specifically within the Heterogeneous Federated Learning setting. To transfer knowledge from the individual local models (identified as 'teachers') to the global model (or 'student'), the authors construct and train a generative model that is capable of producing pseudo samples, inspired from the Data-Free Knowledge Distillation literature. Furthermore, the authors suggest preserving an exponentially moving copy of this generative model. This is specifically designed to provide pseudo-samples from prior communication rounds. The overarching goal of this method is to protect the global model from catastrophic forgetting - a substantial problem in machine learning where a model, after learning new information, forgets the old. By accessing these pseudo-samples from previous rounds, the global model can effectively reinforce its learning and maintain a more comprehensive understanding over the communication rounds. Lastly, the authors introduce a system that employs dynamic weighting and label sampling. This is proposed to enhance the precision in extracting knowledge from local models. By implementing this technique, it is intended that the knowledge transfer process will be more accurate and effective. Strengths: 1. In this manuscript, the authors introduce an elegant yet straightforward technique to avert catastrophic forgetting, an issue that often arises in machine learning models, by maintaining a EMA generator. Remarkably, the method appears to maintain a consistent spatial footprint, equivalent to the dimensions of the generator models, throughout the various communication cycles. This ingenious mechanism facilitates the generation of pseudo samples from preceding rounds, thereby extending its usefulness and practicality in real-world applications. 2. Moreover, the depth of empirical examination carried out in this paper is commendable. The authors have meticulously analyzed the performance of their technique, employing a thorough sensitivity analysis and ablation studies to evaluate the impact of the individual components of their methodology. This level of rigor and detail, often overlooked, attests to the quality of the research conducted and significantly boosts the credibility of their findings. Weaknesses: 1. The manuscript offers a promising exploration into mitigating catastrophic forgetting in the context of Federated Heterogeneous Learning. However, as a reviewer, I find that the examination of catastrophic forgetting and its severity within this specific context could have been more in-depth. Works such as those by Binci et al. [1,2] and Patel et al. [3], which tackle catastrophic forgetting in adversarial settings, provide comprehensive empirical evidence via monitoring student learning curves. Such an approach would have deepened the current study's investigation, allowing readers to better appreciate its significance. 2. The current research draws substantially from existing methodologies, including those used in FedFTG [4] and DENSE [5], particularly adopting the data-free knowledge transfer framework from Do et al. [6]. Consequently, it becomes challenging to discern the novel contribution of this paper. Greater emphasis on aspects like Dynamic Weighting and Label Sampling, currently found in the appendix, would have added more substance to the primary narrative of the manuscript. Additionally, a thorough exploration of the obstacles faced when adapting the adversarial data-free knowledge transfer framework [6] to a Heterogeneous FL setting would have enriched the study. Minor: 1. In the related works section, the absence of adversarial DFKD methodologies, such as those in [1,2,3], which also aim to prevent catastrophic forgetting, is a notable omission. I suggest that the authors provide a clear rationale for their preference for the DFKD framework in [6] over the others ([3] could be excluded due its recentness), clarifying this would enhance the comprehensibility of the manuscript. References: [1] Kuluhan Binci, Nam Trung Pham, Tulika Mitra, and Karianto Leman. "Preventing catastrophic forgetting and distribution mismatch in knowledge distillation via synthetic data." In WACV, 2022. [2] Kuluhan Binci, Shivam Aggarwal, Nam Trung Pham, Karianto Leman, and Tulika Mitra. "Robust and resource-efficient data-free knowledge distillation by generative pseudo replay." In AAAI, 2022. [3] Gaurav Patel, Konda Reddy Mopuri, and Qiang Qiu. "Learning to Retain while Acquiring: Combating Distribution-Shift in Adversarial Data-Free Knowledge Distillation." In CVPR, 2023. [4] Lin Zhang, Li Shen, Liang Ding, Dacheng Tao, and Ling-Yu Duan. "Fine-tuning global model via data-free knowledge distillation for non-iid federated learning." In CVPR, 2022. [5] Jie Zhang, Chen Chen, Bo Li, Lingjuan Lyu, Shuang Wu, Shouhong Ding, Chunhua Shen, and Chao Wu. "Dense: Data-free one-shot federated learning." In NeurIPS, 2022. [6] Kien Do, Thai Hung Le, Dung Nguyen, Dang Nguyen, Haripriya Harikumar, Truyen Tran, Santu Rana, and Svetha Venkatesh. "Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation." In NeurIPS, 2022. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. I am interested in gaining a deeper understanding of the process behind the observations and interpretations, specifically those pertaining to transferability, decision boundaries, and the positioning of synthetic data-points (L165-180). Could the authors elaborate on the methodology or rationale that led to these interpretations? 2. Furthermore, the visualization provided in Figure 2 has caught my attention. I would appreciate it if the authors could elucidate how this figure was constructed. Does this visual representation arise from actual data obtained through a small-scale experiment, or is it a conceptual illustration reflecting your interpretations? 3. Regarding equation 5, I noticed that the KL divergence loss is applied directly to the logits (pre-softmax) features, as opposed to the prediction distribution resulting from the application of softmax to the logits. Could the authors provide a comprehensive explanation for this choice? Does this approach hold any specific advantages or implications for the overall results of your study? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors have thoroughly listed out the limitations and broader impacts in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We highly appreciate Reviewer 1PRp for positive comments and precious feedback on our work. We respond to specific comments below. Q1: The manuscript offers a promising exploration into mitigating catastrophic forgetting in the context of Federated Heterogeneous Learning. However, as a reviewer, I find that the examination of catastrophic forgetting and its severity within this specific context could have been more in-depth. Works such as those by Binci et al. [1,2] and Patel et al. [3], which tackle catastrophic forgetting in adversarial settings, provide comprehensive empirical evidence via monitoring student learning curves. Such an approach would have deepened the current study's investigation, allowing readers to better appreciate its significance. R1: We thank the reviewer's constructive suggestion. We will refer to [1], [2] and [3] as references, thus deepening the investigation that addresses the problem of catastrophic forgetting in adversarial settings. Also, compare to [1], [2] and [3], [6] is more memory efficient, adapts better to the student update, and is more stable for a continuous stream of synthetic samples. That's why we prefer [6]. We will update them in the revised paper, thereby enhancing the comprehensibility of the manuscript. Q2: I am interested in gaining a deeper understanding of the process behind the observations and interpretations, specifically those pertaining to transferability, decision boundaries, and the positioning of synthetic data-points (L165-180). Could the authors elaborate on the methodology or rationale that led to these interpretations? R2: As we proceeded with our work, we first delved into existing related works on transferability including references [4], [5], and [6], among others. Then, during the empirical study, we found that the generator produces two types of synthetic data with yellow and purple circles in Fig. 2 (c), which can mislead the generator. For example, the conditional generator $G(\cdot)$ takes label information (e.g., $y=1$) as one of the inputs to generate synthetic data s. However, the ensemble model’s inferred output based on s results in label 2, i.e., $y=2$. This inconsistency degrades the quality of synthetic data generated by $G(\cdot)$, thereby negatively affecting the global model's performance, as shown in Table 3. The aforementioned process and principles elucidate how we led to these interpretations. Q3: Furthermore, the visualization provided in Figure 2 has caught my attention. I would appreciate it if the authors could elucidate how this figure was constructed. Does this visual representation arise from actual data obtained through a small-scale experiment, or is it a conceptual illustration reflecting your interpretations? R3: Figure 2 is a conceptual illustration to give the reader a better understanding of our interpretations. And Figure 2 was carefully designed based on our findings and observations during our empirical study. Q4: Regarding equation 5, I noticed that the KL divergence loss is applied directly to the logits (pre-softmax) features, as opposed to the prediction distribution resulting from the application of softmax to the logits. Could the authors provide a comprehensive explanation for this choice? Does this approach hold any specific advantages or implications for the overall results of your study? R4: In our work, KL divergence loss is applied to the prediction distribution resulting from the application of softmax to the logits, as opposed to the logits (pre-softmax) features. Both softmax and log-softmax operations are considered within the KL divergence (please see our code). We will clear them further in the revised paper for ease of understanding. --- Rebuttal Comment 1.1: Title: Thanks for the response! Comment: Thank you to the authors for their comprehensive response to the review(s). They have addressed most of my concerns. I would like to maintain my initial rating as I stand by the second point on the weakness section. Nonetheless, I acknowledge the non-triviality of the problems faced and tackled by the authors. --- Reply to Comment 1.1.1: Comment: We greatly appreciate the feedback from the reviewers and the valuable insights provided. We thank the reviewer for raising the second point on the weakness section. We noted it during the rebuttal and have made adjustments and additions accordingly. On the one hand, we adapted the novel contributions of our work, placing greater emphasis on Dynamic Weighting and Label Sampling in the revised main paper. Also, we delved into the obstacles faced by the data-free adversarial knowledge transfer framework [6] in the heterogeneous FL setting. Specifically, while DFRD with [6] achieves excellent performance compared to baselines, we contend that scenarios characterized by high data heterogeneity continue to pose an obstacle to the performance improvement of DFRD with [6] (refer to Table 1). This is because it only dominates the contributions among different clients based on the sample proportion (i.e., $\tau_{i, y}$), which constrains the low-end clients (i.e., those with fewer datasets) who would otherwise make unique contributions to model training. As demonstrated by existing efforts (e.g., [a] [b] and [c]), the more rational control of contributions among different clients is a systematic and formidable challenge, one that we plan to address in future work. We have updated the aforementioned content in the revised paper. We aspire for our response to address the reviewer's concerns. We remain dedicated to addressing concerns you may possess with utmost eagerness. [a] Zhang, Jie, et al. "Federated learning with label distribution skew via logits calibration." International Conference on Machine Learning. PMLR, 2022. [b] Luo, Mi, et al. "No fear of heterogeneity: Classifier calibration for federated learning with non-iid data." Advances in Neural Information Processing Systems 34 (2021): 5972-5984. [c] Shen, Zebang, et al. "An agnostic approach to federated learning with class imbalance." International Conference on Learning Representations. 2021.
Rebuttal 1: Rebuttal: We thank all reviewers for their thoughtful, constructive, and positive review of our manuscript. We are encouraged to hear that the reviewers found the DFRD method we present to be interesting and practical (Reviewers i49s, 1PRp), and thoroughly-evaluated (Reviewers Yxnp, 1PRp, XBHz, i49s, deo9). Meanwhile, they view our methodology as novel (Reviewers Yxnp, 1PRp) and our manuscript as well-written (Reviewers Yxnp, 1PRp). In response to the feedback, we provide detailed responses to address each reviewer's concerns below.
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper proposes a new method called DFRD for robust and privacy-constrained Federated Learning (FL) in the presence of data and model heterogeneity. DFRD uses a conditional generator to approximate the training space of local models and a data-free knowledge distillation technique to overcome catastrophic forgetting and accurately extract knowledge from local models. The experiments show that DFRD achieves performance gains compared to baselines. Strengths: 1. The writing is easy to follow 2. The overall idea seems novel 3. The authors justify the necessities of loss functions and validate their effectiveness in the experiment. Weaknesses: 1. This work is well-engineered, comprising multiple components and hyperparameters. Although stability testing was performed on the same dataset, the optimal hyperparameters for different datasets appear to be unstable (see Fig 5). 2. Although the authors propose loss functions aiming to improve fidelity and diversity, the synthetic images provided in the appendix are still far from achieving these goals. It is also questionable whether such characteristics are really necessary for downstream KD and ensembling. 3. The EMA strategy to avoid forgetting is not new in continuous learning, so the novelty of this technique is limited in this paper. 4. Based on the presented learning curves, the models do not seem to converge after 100 communication rounds for both the proposed and benchmark methods. This undermines the persuasiveness of the results reported in the tables. 5. The overall performance is very low, e.g., less than 20% accuracy for 10-class classification. I have concerns about the practicality of the method given its complexity and unsatisfactory performance in tackling the challenging non-iid setting. 6. I appreciate the authors' efforts to conduct numerous experiments, but how the settings overlap between experiments is not clear. It would be helpful if the authors could provide a table to show the connections. For example, I found it difficult to relate the values in the figures to the numbers in the tables. Ideally, the same experimental setting would result in the same number. Minor comments: The pseudocode provided in the appendix is not well organized. There seems to be a convoluted operation between server and client. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. As this method already uses EMA on global model (G) to get \tilde{G}, why does it need another step to find KL divergence on both G and \tilde{G}? 2 How the w, \rho, and \delta play roles in data heterogeneity? 3 There is a line of work on KD for FL aggregation. Could the author justify why " FedFTG [48] and DENSE [49], which are the most relevant methods to our work?" comparing the other related work mentioned in the manuscript (such as [27], [40], [47]) and Wu et al. listed below: Wu, C., Wu, F., Lyu, L., Huang, Y., & Xie, X. (2022). Communication-efficient federated learning via knowledge distillation. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are discussed in appendix Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We highly appreciate Reviewer Yxnp for positive comments and precious feedback on our work. We respond to specific comments below. Q1: This work is well-engineered, comprising multiple components and hyperparameters. Although stability testing was performed on the same dataset, the optimal hyperparameters for different datasets appear to be unstable (see Fig 5). R1: Thanks for pointing out the details of Fig 5. We argue that there is no correlation between optimal hyperparameters for different datasets. Intuitively, different datasets correspond to different learning tasks, so it is difficult for any method (including DFRD) to ensure consistent optimal hyperparameters for varying learning tasks during practical experiments. Q2: Although the authors propose loss functions aiming to improve fidelity and diversity, the synthetic images provided in the appendix are still far from achieving these goals. It is also questionable whether such characteristics are really necessary for downstream KD and ensembling. R2: These images are necessary for downstream KD and ensembling, evidenced by Tables 1 and 2, that is, these images significantly augment the global model. Also, we present synthetic images under different diversity constraints (mul, add, cat, n-cat, none) in the Appendix (see Fig.14 - 18), where mul is proposed by us (see Fig. 14). The synthetic images exhibited in Fig. 14 clearly outperform other synthetic images (see Figs. 15-18) in terms of fidelity and diversity. Q3: The EMA strategy to avoid forgetting is not new in continuous learning, so the novelty of this technique is limited in this paper. R3: We agree that the EMA strategy is not new in continuous learning, but our innovation is that we are the first effort to apply the EMA strategy to heterogeneous FL with data-free knowledge distillation. Q4: The overall performance is very low, e.g., less than 20% accuracy for 10-class classification. I have concerns about the practicality of the method given its complexity and unsatisfactory performance in tackling the challenging non-iid setting. R4: In our work, we set up extensive heterogeneity settings, including data heterogeneity and model heterogeneity. In terms of data heterogeneity (i.e., non-iid settings), the accuracies of FedAvg and FedAvg+DFRD (+DENSE and +FedFTG) are much greater than 20% for different non-iid settings (including the challenging non-iid setting, i.e., $\omega=0.01$) under model homogeneity, see Table 1. In Table 2, we specifically investigate challenging scenarios characterized by both data and model heterogeneity, resulting in a considerable number of accuracies falling below the 20% threshold, especially on Tiny-ImageNet and FOOD101. Notably, the accuracy of DFRD is greater than 20% on SVHN and CIFAR-10 in most cases. Therefore, we are confident in the practicality of our method. Q5: I appreciate the authors' efforts to conduct numerous experiments, but how the settings overlap between experiments is not clear. It would be helpful if the authors could provide a table to show the connections. For example, I found it difficult to relate the values in the figures to the numbers in the tables. Ideally, the same experimental setting would result in the same number. R5: Due to the space limitation of the main paper, we detail experimental settings in Appendix E. We also agree with the reviewer's comments and will update them in the revised paper. Q6: The pseudocode provided in the appendix is not well organized. There seems to be a convoluted operation between server and client. R6: There are no convoluted operations between server and clients. For ease of understanding, we will add a description of the pseudocode in the revised paper. Q7: As this method already uses EMA on global model ($G$) to get $\tilde{G}$, why does it need another step to find KL divergence on both $G$ and $\tilde{G}$? R7: In our paper, $\tilde{G}$ stores previous knowledge learned from the local models and is used as a complement to $G$ in DFRD. $\tilde{G}$ does not contain information about $G$ in the latest communication round. See lines 19 to 22 in Algorithm 1 for detail. Therefore, an additional step is required to compute the KL divergence for both $G$ and $\tilde{G}$. Q8: How the $\omega$, $\rho$, and $\sigma$ play roles in data heterogeneity? R8: In our paper, Dirichlet process $Dir(\omega)$ is utilized to assign training data to each client, which is frequently used in existing federated learning works [34,37, 38]. Here, $\omega$ is the concentration parameter of $Dir(\omega)$ and smaller $\omega$ corresponds to stronger data heterogeneity. We provide the visualizations of the data partitions for the six datasets at varying $\omega$ values, which can be found in Fig.6 and 7 in Appendix. In addition, $\rho$ and $\sigma$ are parameters related to model heterogeneity rather than data heterogeneity, see Appendix D for detail. Q9: There is a line of work on KD for FL aggregation. Could the author justify why " FedFTG [48] and DENSE [49], which are the most relevant methods to our work?" comparing the other related work mentioned in the manuscript (such as [27], [40], [47]) and Wu et al. listed below: Wu, C., Wu, F., Lyu, L., Huang, Y., & Xie, X. (2022). Communication-efficient federated learning via knowledge distillation. R9: To ensure fair comparisons, we neglect the comparison with methods that require to download auxiliary models or datasets, such as RHFL[27], FedDF[40] and FedGen [47] and the mentioned paper. Also, we'll add the mentioned paper as one of our references. --- Rebuttal Comment 1.1: Comment: Dear reviewer. I kindly inquire whether our response has successfully addressed your uncertainties. We remain dedicated to addressing concerns you may possess with utmost eagerness.
null
null
null
null
null
null
Post Hoc Explanations of Language Models Can Improve Language Models
Accept (poster)
Summary: This paper proposes an alternative prompting framework to Chain-of-thoughts, by using post-hoc explanations from proxy smaller LLMs. Specifically, given a query question and few-shot examples, the method firstly uses a proxy model to get post-hoc explanations on key input words. Then the key input words are combined with few-shot examples to form in-context prompt. The method is one of the first attempt to utilise post-hoc explanations to boost in-context learning performance. It doesn't require human annotation on reasoning intermediate steps, while outperforming CoT on several challenging tasks from the Big-Bench-Hard benchmark. Strengths: 1. Research on post-hoc explanation have mostly been explored to better understand model prediction. This work potentially opens up a new area of application, as it uses post-hoc explanation to boost in-context learning performance 2. The presentation is clear and easy-to-follow. The author provides informative ablations and comparisons to access the method from multiple aspects, which leads to an optimal settings with sound empirical evidences 3. The improvement with finetuned proxy model, is significant over CoT: 10-25% gain as highlighted in abstract. Weaknesses: 1. Although the improvement over CoT is significant and consistent with fully finetuned proxy model, the improvement becomes less in scale and less consistent with non-finetuned proxy model as shown in Table 2. In fact, if we take a closer look at table 1 & 2, we can see for GPT-3.5 AMPLIFY with non-finetuned proxy model, is worse than CoT on Formal Fallacies(48.3 / 54.6), CommonsenseQA(71.9 / 75.2), Coin Flip (55.4 / 61.0). Besides, such behaviour is not discussed in "Impact of Proxy Model Selection on LLM Performance" paragraph 2. The setting with finetuned proxy model, requires training data for the target task. This breaks the typical assumption of in-context learning where only a handful of annotated examples are available. Besides, it is desirable to show the performance of finetuned proxy model on these tasks (with E=0/10/200), to better assess the benefit of AMPLIFY Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. When finetuning the proxy model on target task, how many data points did you use? Can we have more information / analysis here with regard to the number of data points used for finetuning vs the number of few-shot examples used to prompt final LLM. It will be good if we can have a fair setting to compare with CoT / AO, where the proxy model is finetuned with the same set of data points as we use to prompt the final LLM. 2. Do you think there are tasks that CoT will work better in principle, like complex / multi-hop reasoning tasks? If so, do you think it's beneficial to combine AMPLIFY with CoT in those cases? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The paper has discussed the limitations properly Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their thoughtful comments, and for recognizing the novelty of our work. In the subsequent sections, we will address the specific questions and comments raised by the reviewer. Additionally, we are committed to incorporating all of our responses and discussions into the final version of the paper. >“Although the improvement over CoT is significant and consistent with fully finetuned proxy model..” As we experimented with the fine-tuning of proxy models, it's important to note that this step can be eliminated by using a more capable pretrained proxy model, yet still achieving performance gains over baselines. The table below shows that the performance of LLM surpasses the baseline when we use gpt2-medium instead of gpt2-small, **without any fine-tuning**. This demonstrates that fine-tuning is not mandatory. We would also like to highlight that our work represents the first pipeline that utilizes post hoc explanation to enhance LLM performance. We have aimed to make it modular to ensure that modifications for improving model performance are easy to implement. | Experiment Tasks [LLM : GPT3.5] | AMPLIFY (gpt2-small) | AMPLIFY(gpt2-medium) | CoT | | ----------------- | ---------- | ----------- | ----- | | Snarks| 88.8| **91.6**| 69.4 | | CausalJ| 71.0| **71.0** | 63.1 | | RuinN | 65.1| **70.7** | 62.9 | | Formal Fallacies| 48.3| **56.0**| 54.6 | | SalientT | 57.7| **60.8**| 54.7 | | CSQA| 71.9 | **75.5** | 75.2 | | Coin Flip (OOD) | 55.4| 59.6| **61.0** | Our motivation to show results for fine-tuning models in the paper is to demonstrate the improvement in LLM performance when the proxy model is further fine tuned. >“The setting with finetuned proxy model, requires training data…” Thank you for your comment. Appendix “A.1 Proxy Model Task Performance” shows the proxy model performance with E=0 and E=200. Even with E=200 finetuning, the proxy models' performance is far below compared to performance achieved by AMPLIFY. There are two reasons behind this low performance of proxy models : (1) these tasks are extremely challenging for such small models [1], (2) The train samples used for fine-tuning weren’t enough to achieve significant performance improvements. For instance, we used 87, 91, 215, 479 samples to fine-tune for Snarks, Causal Judgment, Salient Translation, and Ruin Names. Furthermore, we would like to emphasize that performance measurements with fine-tuning were provided for the sole purpose of understanding changes in performance if the proxy model is fine-tuned with training samples. As demonstrated in the table responding to the previous comment, the fine-tuning step can be entirely eliminated with the use of a slightly more effective model. We will include this detail in the final draft for more clarity. >“Can we have more information / analysis here with regard to the number of data points used for finetuning vs the number of few-shot examples used to prompt final LLM. In order to create the validation set, we randomly selected 40% of the total samples from the train set for each dataset. We used s=10 (number of few-shot examples) and k=5 (# top-k tokens) as the default hyper-parameters for results shown in all the tables. We provide results on other hyper-parameter settings in Table 6 of Appendix A.3. We will include this detail in the final draft. >”It will be good if we can have a fair setting ..” In the table below, we show performance gains when the proxy model (gpt2-small) is fine-tuned with the same samples chosen for the final LLM. We observe the performance to be very similar to what we obtained after fine-tuning with all train data for E=10. | Experiment Tasks [LLM : GPT3.5] | AO | CoT | AMPLIFY (gpt2-small, E = 50) | AMPLIFY (reported in Table 1) | |------------------|------|------|----------|-----------| | CausalJ | 57.8 | 63.1 | 73.6 | **76.3** | | RuinN | 69.6 | 62.9 | 68.5 | **77.5** | | SalientT | 43.2 | 54.7 | 55.2 | **60.8** | | CSQA | 75.7 | 75.2 | 76.0 | **77.9** | | Coin Flip(OOD) | 52.9 | 61.0 | 61.7 | **65.3** | >“Do you think there are tasks that CoT will work better in principle…” Thank you for your comment. For our experiments, we focused on tasks that require complex language understanding [2], which are also cases where post hoc explanations have been found to be useful in capturing important features, hence providing useful explanations [3]. However, we also experimented with GSM8k (math problem dataset) used in CoT and observed that AMPLIFY outperforms AO but performs worse than CoT. shown in table below. | Experiment Tasks | AO | CoT | AMPLIFY (proxy model : gpt2-small) | | --- | --- | --- | --- | | GSM8k | 22.7 | **43.5** | 27.4 | While we outperform the standard few-shot approach, the underperformance of AMPLIFY when compared to CoT is expected because solving math problems requires multi-step reasoning, a complex function which is beyond what post hoc explanations are designed to explain. We further wish to clarify that we do not present AMPLIFY as a replacement for CoT, but rather as a superior alternative for tasks requiring complex language understanding; these are tasks for which obtaining chains-of-thought through human annotations is exceptionally challenging [2]. The goal of our approach is to generate explanations without any dependence on human annotations. However, a combination of CoT and AMPLIFY could lead to further gains. This will require more analysis and could be an interesting area for future research. **References** [1]​​ Srivastava, Aarohi, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models.(2022) [2] Suzgun, Mirac, et al. Challenging big-bench tasks and whether chain-of-thought can solve them.(2022) [3] Madsen, Andreas, et. al. Post-hoc interpretability for neural nlp: A survey. (2021) --- Rebuttal Comment 1.1: Title: Response to Author Comment: Thank you for the thoughtful and detailed response! I agree that AMPLIFY will work better on tasks requiring "complex language understanding" and it's expected that for multi-step reasoning tasks like GSM8K, AMPLIFY will be less effective to CoT. It's a good practice that the authors share those results, which help us readers to understand better of the applicability of AMPLIFY and have a clearer understanding of its contribution. With the author's response, I perceive AMPLIFY as one of the first attempt on using post-hoc explanation for improving generation performance on tasks requiring "complex language understanding". It serves as a nice alternative prompting method to CoT as it doesn't require any manual design of CoT prompts. I will confidently keep my recommendation score unchanged based on this perception. --- Reply to Comment 1.1.1: Comment: Thank you very much for reviewing our response! We will include the additional results in the final draft.
Summary: This paper demonstrates how post hoc explanations through a small LM could assist the performance of LLMs. The process is divided into 4 steps: Proxy Model Selection, Few-shot sample selection, compute explanations, and formatting prompts for LLMs. This technique automatically generates few-shot demonstrations, reducing human-annotation for few-shot in-context learning. Strengths: - The method is sound and novel. - The paper contains extensive ablation experiments. - The improvement for some tasks is dramatic. (e.g. snarks) Weaknesses: - This paper does not compare the performance with other baselines such as Auto-CoT that automatically generate demonstrations for in-context learning. Zhang et al (2022) Automatic Chain of Thought Prompting in Large Language Models - Although it is stated that the experimentation is specifically carried out on datasets aimed at evaluating complex linguistic understanding concepts, it omits datasets such as DisambiguationQA, Hyperbaton, and Word Soriting of Big-Bench-Hard benchmark. - When GPT-2 is not fine-tuned (E=0) in Table 2, AMPLIFY outperforms both AO and CoT on 3 out of 7 datasets for both GPT-3 and GPT-3.5, indicating that fine-tuning process is crucial. Also, the performance gets worse compared to AO for datasets such as formal fallacies, commonsenseQA, and ruin names for GPT-3.5. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - When fine-tuning GPT-2, what training datasets are used? As far as I know, tasks of Big-Bench-Hard do not contain training instances. - Do you expect that AMPLIFY would be still effective for models with less than 100B parameters? Also, do you think the effect of AMPLIFY could generalize to open-source LLMs? - What is the default s value (number of shots) for GPT-3, GPT-3.5 result of Table 1? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable thoughts and suggestions. We appreciate the reviewer's acknowledgement of the novelty and analysis provided in our work. In the subsequent sections, we will address the specific questions and comments raised by the reviewer. Additionally, we are committed to incorporating all of our responses and discussions into the final version of the paper. >“This paper does not compare the performance with other baselines..” Thank you for your comment. We experimented with the specific baseline suggested in the comment and found that our method performs better than Auto-CoT, as shown in the table below. We believe the reason behind the under-performance of Auto-CoT (and similar self-reasoning based methods) is that the reasoning generated by LLMs is not reliable and has a higher chance of being incorrect, which has also been shown in several research works [1, 2, 3]. | Experiment Tasks [LLM : GPT3.5] | AO | CoT | Auto-CoT | AMPLIFY | |------------------|------|------|----------|-----------| | Causal Judgment | 57.8 | 63.1 | 63.1 | **76.3** | | Ruin Names | 69.6 | 62.9 | 67.4 | **77.5** | | Salient Translation | 43.2 | 54.7 | 53.2 | **60.8** | | CommonsenseQA | 75.7 | 75.2 | 74.6 | **77.9** | | Coin Flip(OOD) | 52.9 | 61.0 | 62.9 | **65.3** | >“Although it is stated that the experimentation is specifically carried out on datasets aimed at ..” Thank you for your comment. We conducted experiments on the suggested datasets and observed similar gains achieved by AMPLIFY. However, we observed only minimal improvement in the task of word sorting. This is because word sorting requires an understanding of lexical properties over linguistic semantics. | Experiment Tasks [LLM : GPT3.5] | Random | SOTA | Avg. | Max | AO | CoT | AMPLIFY | |--------------------|--------|-------|-------|-------|-------|-------|-------| | Disambiguation QA | 33.2 | 51.6 | 66.6 | 93.3 | 66.6 | 70.5 | **74.5** | | Word Sorting | 0 | 33.1 | 62.6 | 100 | 37.8 | 43.1 | **43.6** | | Hyperbaton | 50.0 | 67.1 | 74.7 | 100 | 68.5 | 77.4 | **79.7** | >”When GPT-2 is not fine-tuned (E=0) in Table 2, AMPLIFY outperforms…” While we experimented with the fine-tuning of proxy models, it's important to note that this step can be eliminated by using a more capable pretrained proxy model, and still achieve performance gains over baselines. The table below shows that the performance of LLM surpasses the baseline when we use gpt2-medium instead of gpt2-small, without any fine-tuning. This demonstrates that fine-tuning is not mandatory. | Experiment Tasks [LLM : GPT3.5] | AMPLIFY (gpt2-small) | AMPLIFY(gpt2-medium) | CoT | | ----------------- | ---------- | ----------- | ----- | | Snarks | 88.8 | **91.6** | 69.4 | | Causal Judgment | 71.0 | **71.0** | 63.1 | | Ruin Names | 65.1 | **70.7** | 62.9 | | Formal Fallacies | 48.3 | **56.0** | 54.6 | | Salient Translation | 57.7 | **60.8** | 54.7 | | CommonsenseQA | 71.9 | **75.5** | 75.2 | | Coin Flip (OOD) | 55.4 | 59.6 | **61.0** | Our motivation to show results for fine-tuning models in the paper is to demonstrate the improvement in LLM performance when the proxy model is further fine tuned. >“When fine-tuning GPT-2, what training datasets are used?..” Generally, the Big-Bench-Hard datasets have been used in both settings : (1) evaluation only and (2) train/test split versions. For our experiments, we used the version that provides a train-test split, which is publicly hosted on Hugging Face. >“Do you expect that AMPLIFY would be still effective for models…” In our work, we analyzed models with a parameter size of >100Bn for two reasons: (1) These are models that have demonstrated emergent abilities in performing extremely challenging tasks [4], and (2) to provide a fair comparison against CoT, which performs poorly on LLMs with a parameter size smaller than 100Bn. We ran a quick experiment with Alpaca7B, an open-source LLM, on the Snarks dataset and found that AMPLIFY performs at 44.4, compared to 38.8 (AO) and 41.6 (CoT). This suggests that AMPLIFY may also be used for smaller models (<100Bn), but a deeper analysis is required, which could be a potential area for future work. >“What is the default s value (number of shots) for .." We used s=10 and k=5 as the default hyper-parameters for results shown in all the tables. We provide results on other hyper-parameter settings in Table 6 of Appendix A.3. **References** [1] Turpin, M., Michael, J., Perez, E., & Bowman, S. R. (2023). Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting. arXiv preprint arXiv:2305.04388. [2] Lanham, Tamera, et al. "Measuring Faithfulness in Chain-of-Thought Reasoning." arXiv preprint arXiv:2307.13702 (2023). [3] Zhao, Ruochen, et al. "Verify-and-edit: A knowledge-enhanced chain-of-thought framework." arXiv preprint arXiv:2305.03268 (2023). [4] Wei, Jason, et al. "Emergent abilities of large language models." arXiv preprint arXiv:2206.07682 (2022). --- Rebuttal Comment 1.1: Comment: Thank you for your response and sharing of additional results. I think the additional results would give more valuable insights to the reader. I will keep my score as "weak accept". --- Reply to Comment 1.1.1: Comment: Thank you very much for reviewing our response! We will include the additional results in the final draft.
Summary: The paper presents AMPLIFY, an approach that uses post-hoc explanations from a proxy model to improve the prompting performance of large language models. For a given dataset, the approach assumes access to a set of labeled validation data that is used for crafting a prompt. First, the approach selects k examples that are misclassified by LLMs and exhibit high misclassification confidence scores. Next, post-hoc explanation techniques are used to find the most important input features of these selected examples, which are later used to construct template-based rationales. The rationales are used in the final prompts following the chain-of-thought paradigm. The paper conducts experiments on 7 datasets from Big-Bench-Hard. Experimental results suggest good improvements over the few-shot answer-only prompting (AO) baseline and the few-shot CoT baseline. The paper also includes analyses suggesting the importance of example selection strategies as well as the impacts of using different proxy models and explanation techniques. Strengths: The paper studies an interesting problem that focuses on using post-hoc explanations for improving LLMs’ performance, which is relatively new. The experiments cover 7 datasets and show performance improvements compared to few-shot baselines. The paper provides some useful analyses, especially the analysis of the example selection strategies. The paper is well-written and easy to follow. Weaknesses: 1: The major comparison is unfair in some way The proposed approach uses a validation set (the size of the validation set is also not mentioned in the paper, nor in the appendix) and selects examples from the validation set. By contrast, standard prompting and chain-of-thought prompting typically only use few-shot examples. So part of the improvements might also be attributed to the use of more data in addition to the prompting method proposed in this paper (see weakness 2). In addition, the paper does not compare against approaches that can also use the validation set instead of just a few-shot examples. 2: However, the experiments do not clearly suggest the effectiveness of using post-hoc explanations The proposed approach uses the LLM itself + smaller proxy models to find more informative examples to be used in the prompts (the selection strategy). This active selection strategy does not rely on using post-hoc explanations and can be applied to answer-only prompting or chain-of-thought prompting as well. It would be good to provide AO and CoT performance using the selected examples as well to give a better understanding of the effectiveness of using better examples versus the effectiveness of using post-hoc explanations in prompts. The paper does provide analysis of the impacts of using different selection strategies. As shown in Table 3, using random examples, the performance of AMPLIFY is only 59.3 on GPT3 and 62.0 on GPT-3.5; at the same time, according to Table 1, CoT performance is 58.0 on GPT3 and 62.9 on GPT-3.5. IIUC, this suggests CoT and AMPLIFY is comparable; and selecting better examples contribute the majority of the performance improvements. While it seems that most of the improvements come from the selection strategy, the paper does not compare against other approaches that involve active annotating examples for in-context learning (e.g., Su et al. (2022); Diao et al. (2023)). Also, this proposed selective annotation strategy does not distinguish itself from existing work, which uses similar confidence-based ways to actively annotate examples (Su et al., 2022; Diao et al., 2023). Selective Annotation Makes Language Models Better Few-Shot Learners (Su et al., 2022) Active Prompting with Chain-of-Thought for Large Language Models (Diao et al., 2023) 3: The paper also misses some important details on the approach and the baseline Regarding the proposed approach: * When getting the post-hoc explanations. Does it explain the ground truth label or the predicted label? (it is not clearly explained in line 172) * How does the method get the initial predictions, in order to determine the misclassified examples? Is it obtained using few-shot AO or few-shot CoT? * It seems no information regarding the size of the validation set is provided. Regarding the baselines: * It seems the CoT baselines and AO baselines use few-shot prompting. How is the shot selected? Are they randomly selected? How many shots are used? * How are the CoTs for the CoT baseline written? Based on line 129, the CoTs are taken from Wei et al. (2022), but the original CoT paper does not include prompts for these datasets. Are they from Suzgun et al. (2022)? In addition, it would be also helpful if the appendix can include some prompt examples. 4: The paper uses a subset from big-bench-hard that mainly focuses on understanding linguistic concepts. On these datasets, CoT typically leads to no improvements or minor improvements, compared to AO. While it is understandable that the paper chooses to focus on understanding linguistics concepts, it would still be beneficial to benchmark the effectiveness of this approach on a broader range of multi-step reasoning tasks (e.g., GSM) where CoT shows substantial improvements over AO. And on these tasks, using rationales consisting of just top-k words may be less effective. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See weakness Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The paper discusses the limitations in appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments and for acknowledging the novelty and insights presented in our work. In the subsequent sections, we will address the specific questions and comments raised by the reviewer. Furthermore, we intend to incorporate all our responses and discussions into the final version of the paper. **Impact of Post Hoc Explanations** In the table below, we show that the performance of AO with the samples selected using step-2 of AMPLIFY, denoted by AO-AMPLIFY, is worse than AMPLIFY which suggests that it is not only the sample selection but also the explanations for each sample that helps in improving LLM performance. |Experiment Tasks [LLM : GPT3.5]|AO|AO-AMPLIFY|AMPLIFY| |------------------|---|----------|--------| |CausalJ|57.8|63.1|**76.3**| |RuinN|69.6|73.0|**77.5**| |SalientT|43.2|48.7|**60.8**| |CSQA|75.7|75.1|**77.9**| |CoinFlip|52.9|55.2|**65.3**| It is important to note that we cannot experiment with CoT using samples chosen by AMPLIFY because CoT has hard-coded prompts which were created with human assistance for only a fixed set of samples for which their method performed the best [2]. **Other Baselines** We experimented with two other baselines Auto-CoT and Vote-k (Su et al., 2022), which requires additional data to create the prompt, and found that our method performs better, as shown in the table below. We believe the reason behind the under-performance of Auto-CoT (and similar self-reasoning based methods) is that the reasoning generated by LLMs is not reliable and has a higher chance of being incorrect, which has also been shown in several research works [2,4]. Performance for Vote-k remained close to AO because it doesn’t include additional explanations for the few-shot samples in the prompt. |Experiment Tasks [LLM : GPT3.5]|AO|CoT|Auto-CoT|Vote-k|AMPLIFY| |---------------------------------|-------|------|----------|-------|--------------| |CausalJ|57.8|63.1|63.1|55.2|**76.3**| |RuinN|69.6|62.9|67.4|64.0|**77.5**| |SalientT|43.2|54.7|53.2|47.7|**60.8**| |CSQA|75.7|75.2|74.6|73.9|**77.9**| |CoinFlip|52.9|61.0|62.9|54.7|**65.3**| We cannot make a fair comparison with Diao et al. (2023) because their work involves human-assisted annotations for datasets that are different from those requiring complex language understanding, which are the tasks we focused on in our study. In contrast to the methods suggested by Su et al. (2022) and Diao et al. (2023), our approach eliminates the need for any annotation step, thereby removing any dependence on human assistance. >“It seems no information..” Thank you for your comment. In order to create the validation set, we randomly selected 40% of the total samples from the train set for each dataset. We will include this detail in the final draft. >“Does it explain the ground truth label..” Post hoc explanations are generated with respect to the ground truth label. The intuition behind this is to obtain relevant corrective signals, in the form of the tokens important for the prediction of the ground truth label, that could assist LLM to make the correct prediction. We have mentioned this detail in the figure caption and on lines 174-175, but we will elaborate on it further in the final draft for more clarity. >“How does the method get the initial predictions..” We used zero-shot prompting to obtain misclassified examples, i.e., the prompt only contains the sample from the validation set for the LLM to predict the response. We adopted this setting for two reasons: (1) to acquire as many misclassified examples as possible, thereby obtaining sufficient post hoc explanations for the LLM to make corrections, and (2) to maintain consistency with the post hoc explanation generation step. The post hoc explanation for each sample is computed on a zero-shot prompt, ensuring that the resulting explanation consists solely of important tokens from the input sample, rather than tokens from other samples as can occur with a few-shot prompt. >“It seems the CoT baselines and AO baselines..” For our experiments, we used the same few shot prompts provided in [1,2] for both CoT and AO baselines. >“How are the CoTs for the CoT baseline written? ..” Yes, the chain-of-thoughts for tasks other than the one experimented in [1] is taken from [2]. >“ appendix can include..” Thank you for your comment. We will revise the appendix with prompt examples. >“it would still be beneficial to benchmark the effectiveness…” Thank you for your comment. For our experiments, we focused on tasks that require complex language understanding [2], which are also cases where post hoc explanations have been found to be useful in capturing important features, hence providing useful explanations [3]. However, we also experimented with GSM8k (math problem dataset) used in CoT and observed that AMPLIFY outperforms AO but performs worse than CoT. shown in the table below. | Experiment Tasks | AO | CoT | AMPLIFY (gpt2-small) | | --- | --- | --- | --- | | GSM8k | 22.7 | **43.5** | 27.4 | While we outperform the standard few-shot approach, the underperformance of AMPLIFY when compared to CoT is expected because solving math problems requires multi-step reasoning, a complex function which is beyond what post hoc explanations are designed to explain. We further wish to clarify that we do not present AMPLIFY as a replacement for CoT, but rather as a superior alternative for tasks requiring complex language understanding; these are tasks for which obtaining chains-of-thought through human annotations is exceptionally challenging [2]. **References** [1] Wei, Jason, et al.Chain-of-thought prompting elicits reasoning in large language models.(2022) [2] Suzgun, Mirac, et al.Challenging big-bench tasks and whether chain-of-thought can solve them.(2022) [3] Madsen, Andreas, et. al. Post-hoc interpretability for neural nlp: A survey.(2021) [4] Zhang, Zhuosheng, et al. Automatic chain of thought prompting in large language models(2022) --- Rebuttal Comment 1.1: Comment: Thank you for the response and clarification. I appreciate the added results for isolating the impacts of selecting better examples. I've raised by score. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our response. We greatly appreciate your recognition of our work and for increasing the score.
Summary: This paper proposes AMPLIFY framework that leverages post-hoc explanations to generate rationales automatically for chain-of-thought prompting. The framework consists of four stages: (1) adopt a light-weight model as the proxy to compute explanations, (2) select few-shot samples misclassified by LLM, (3) use attribution scores to identify the important words as rationales, (4) prompt LLM for predictions. Experimental results show that AMPLIFY outperforms the previous methods by a large margin across diverse tasks. Strengths: 1. The paper is well-written and well-motivated. 2. The idea of leveraging the supervision from the small models to improve LLM is novel. 3. The results on seven tasks demonstrate the effectiveness of the proposed method. Weaknesses: Overall, the proposed method seems too heavy and its costs outweigh the performance benefits. 1. It requires an extra proxy model and fine-tuning on every target task in most cases, which can be impractical and laborious. 2. The second stage selects misclassified samples from the entire validation set and conducts filtering with a pre-defined metric, which breaks the constraints of "few-shot". 3. The third stage selects the top-k most important words to construct the rationale, which does not take into account the interactions between words that affect the model predictions. 4. The datasets they evaluated on are limited. The authors do not compare their method with CoT on more complex tasks like math problems and multi-hop reasoning. In contrast, CoT performs well under few-shot settings and only requires a few human-annotated rationales without any extra proxies and training costs. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Experiments show that fine-tuning is necessary in most cases (compared with "Random" baseline). What is the minimum size of a validation set to ensure that the explanations provided by the proxy model are reliable? (If it is too small, the proxy model is prone to overfitting.) 2. I'm curious about the performance of the proposed method on math problems compared with CoT. Is it general enough to improve the performance on various tasks? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors adequately address the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to the reviewer for their insightful comments and suggestions. We are pleased to know they acknowledge our novel approach of using smaller models to augment the decision-making capabilities of larger ones. In the following sections, we address specific questions and comments raised by the reviewer. Furthermore, we intend to incorporate all our responses and discussions into the final version of the paper. >“requires an extra proxy model and fine-tuning..” While we experimented with the fine-tuning of proxy models, it's important to note that this step can be eliminated by using a more capable pretrained proxy model, while still achieving performance gains over baselines. The table below shows that the performance of LLM surpasses the baseline when we use gpt2-medium instead of gpt2-small, **without any fine-tuning**. This demonstrates that fine-tuning is not mandatory. Our motivation to show results for fine-tuning models in the paper is to demonstrate the improvement in LLM performance when the proxy model is further fine tuned. | Experiment Tasks [LLM : GPT3.5] | AMPLIFY (proxy model :gpt2-small) | AMPLIFY(proxy model :gpt2-medium) | CoT | | ----------------- | ---------- | ----------- | ----- | | Snarks | 88.8| **91.6** | 69.4| | Causal Judgment| 71.0 | **71.0** | 63.1 | | Ruin Names| 65.1 | **70.7** | 62.9 | | Formal Fallacies | 48.3 | **56.0** | 54.6 | | Salient Translation | 57.7 | **60.8** | 54.7 | | CSQA | 71.9 | **75.5** | 75.2 | | Coin Flip (OOD) | 55.4 | 59.6 | **61.0** | >“third stage selects the top-k most important words…” One of the benefits of AMPLIFY is that we do not need to necessarily account for the interaction between the words in the explanation, as the LLM already sees them in context. To confirm this hypothesis, we have conducted an additional experiment: We have now added a new variant of AMPLIFY that highlights the most influential keywords in the example rather than listing them or their interactions separately. This variant still results in significant improvements over baselines, as was observed in Table 1 of the paper. It requires a shorter context length and provides further empirical evidence supporting this hypothesis. This advantage of not providing word interactions also makes AMPLIFY more computationally efficient since providing word interactions in the prompt is a non-trivial and computationally expensive process [5,6,7]. >“datasets they evaluated on are limited...” Thank you for your comment. For our experiments, we focused on tasks that require complex language understanding [1], which are also cases where post hoc explanations have been found to be useful in capturing important features, hence providing useful explanations [2]. However, we also experimented with GSM8k (math problem dataset) used in CoT and observed that AMPLIFY outperforms AO but performs worse than CoT, shown in the table below. | Experiment Tasks | AO | CoT | AMPLIFY (proxy model : gpt2-small) | | --- | --- | --- | --- | | GSM8k | 22.7 | **43.5** | 27.4 | While we outperform the standard few-shot approach, the underperformance of AMPLIFY when compared to CoT is expected because solving math problems requires multi-step reasoning, a complex function which is beyond what post hoc explanations are designed to explain. We further wish to clarify that we do not present AMPLIFY as a replacement for CoT, but rather as a superior alternative for tasks requiring complex language understanding; these are tasks for which obtaining chains-of-thought through human annotations is exceptionally challenging [1]. >“The second stage selects misclassified samples from...” Thank you for your comment. We observed no change in performance when only the misclassified samples from the proxy model were selected, a common sample selection strategy in few-shot learning paradigms used in several other research works [3,4]. We believe the reason behind no change in performance is due to the fact that the misclassifications made by the proxy model generally encompass all the misclassified samples from the LLM, covering all samples that can provide corrective signals to the LLM. We will include this detail in the final draft for clarity. >“What is the minimum size of a validation set.. ” Thank you for your comment. To create the validation set, we randomly selected 40% of the total samples from the train set for each dataset. According to the accuracies of the fine-tuned proxy models presented in Table 5 of Appendix A.1, the accuracy after fine-tuning continues to be worse than the performance achieved by AMPLIFY. Therefore, it can be concluded that the proxy models have not overfit. >“the proposed method seems too heavy..” We address this comment in more detail in the global comment under “Trade-Off between Computational Cost and Performance”. **References** [1] Suzgun, M., et.al. Challenging big-bench tasks and whether chain-of-thought can solve them. [2] Madsen, Andreas, et. al "Post-hoc interpretability for neural nlp: A survey." [3] Zhang, Zhuosheng, et al. "Automatic chain of thought prompting in large language models." [4] Chang, Ernie, et al. "The selectgen challenge: Finding the best training samples for few-shot neural text generation." [5] Byrd, Roy J.,et. al. Identifying and extracting relations in text. na, [6] Agichtein, Eugene, et.al "Snowball: Extracting relations from large plain-text collections." [7] Bunescu, Razvan C., et. al "Extracting relations from text: From word sequences to dependency paths." --- Rebuttal Comment 1.1: Comment: Thanks for the clarification. I will raise my score. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our response. We greatly appreciate your recognition of our work and for increasing the score.
Rebuttal 1: Rebuttal: We are grateful to the reviewers for their valuable and insightful comments. It is appreciated that the reviewers found our work to be well-written and well-motivated, and considered our analysis to be novel and insightful in improving LLM performance. In this section, we would like to address the common concerns raised by the reviewers. **Trade-Off between Computational Cost and Performance** Our research is focused on enhancing the performance of Large Language Models (LLMs) in complex language understanding tasks. These tasks include causal judgment and sarcasm detection, among others, where creating an appropriate Chain-of-Thought (CoT) is extremely challenging. This weakness is studied in more detail in [1, 2, 3, 4]. We propose a method that provides post hoc explanations to assist LLMs in identifying key components of the input for accurate decision-making. This method offers significant performance improvements compared to the standard few-shot learning (AO) and CoT. Our method only requires the computation of post hoc explanations for a smaller proxy model ($\sim$0.1Bn params). This is thousands of times less computationally demanding than a single inference with an LLM ($\sim$175Bn params) and can be executed without additional GPU support. This makes the cost significantly lower compared to the substantial performance improvements in complex language understanding tasks. We hope this clarifies the trade-offs between computational cost and task performance resulting from our approach. **Importance of Fine-tuning Proxy Model** While we experimented with the fine-tuning of proxy models, it's important to note that this step can be eliminated by using a more capable pretrained proxy model, while still achieving performance gains over baselines. The table below shows that the performance of LLM surpasses the baseline (CoT) when we use gpt2-medium instead of gpt2-small, **without any fine-tuning**. This demonstrates that fine-tuning of the proxy model is not mandatory. Our motivation to show results for fine-tuning models in the paper is to demonstrate the improvement in LLM performance when the proxy model is further fine tuned. | Experiment Tasks [LLM : GPT3.5] | AMPLIFY (gpt2-small) | AMPLIFY(gpt2-medium) | CoT | | ----------------- | ---------- | ----------- | ----- | | Snarks | 88.8 | **91.6** | 69.4 | | Causal Judgment | 71.0 | **71.0** | 63.1 | | Ruin Names | 65.1 | **70.7** | 62.9 | | Formal Fallacies | 48.3 | **56.0** | 54.6 | | Salient Translation | 57.7 | **60.8** | 54.7 | | CommonsenseQA | 71.9 | **75.5** | 75.2 | | Coin Flip (OOD) | 55.4 | 59.6 | **61.0** | We would also like to highlight that our work represents the first pipeline that utilizes post hoc explanation to enhance LLM performance. We have aimed to make it modular to ensure that modifications for improving model performance are easy to implement. The modularity of our method also helps users to make most out of their computational resources. **Other Baselines** Reviewers suggested few baseline methods other than the ones used in our experiments. Hence, we experimented with these baselines, i.e, Auto-CoT[2] and Vote-k[5], and found that our method performs better, as shown in the table below. We believe the reason behind the under-performance of Auto-CoT (and similar self-reasoning based methods) is that the reasoning generated by LLMs is not reliable and has a higher chance of being incorrect, which has also been shown in several research works [1,2,4]. Performance for Vote-k remained close to AO because it doesn’t include additional explanations for the few-shot samples in the prompt. | Experiment Tasks [LLM : GPT3.5] | AO | CoT | Auto-CoT |Vote-k | AMPLIFY | |---------------------------------|-------|------|----------|-------|--------------| | Causal Judgment | 57.8 | 63.1 | 63.1 | 55.2 | **76.3** | | Ruin Names | 69.6 | 62.9 | 67.4 | 64.0 | **77.5** | | Salient Translation | 43.2 | 54.7 | 53.2 | 47.7 | **60.8** | | CommonsenseQA | 75.7 | 75.2 | 74.6 | 73.9 | **77.9** | | Coin Flip (OOD) | 52.9 | 61.0 | 62.9 | 54.7 | **65.3** | **References** [1] Suzgun, M., et.al. (2022). Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261. [2] Zhang, Zhuosheng, et al. "Automatic chain of thought prompting in large language models." arXiv preprint arXiv:2210.03493 (2022). [3] Turpin, M.,et. al(2023). Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting. arXiv preprint arXiv:2305.04388. [4] Lanham, Tamera, et al. "Measuring Faithfulness in Chain-of-Thought Reasoning." arXiv preprint arXiv:2307.13702 (2023). [5] Su, Hongjin, et al. "Selective annotation makes language models better few-shot learners." arXiv preprint arXiv:2209.01975 (2022).
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
The Pursuit of Human Labeling: A New Perspective on Unsupervised Learning
Accept (spotlight)
Summary: The paper proposes a framework, called HUME, for providing human labels without any supervision. The method uses an assumption that classes derived from human labelling are linearly separable regardless of the representation space used to represent the dataset, ie invariant to sufficiently strong representation spaces. The authors show that this method generates labels that are correlated with human labeling (ie ground truth) of the dataset. HUME also shows good performance on STL-10 (ie better than a supervised linear classifier), and comparable performances on other datasets. Results are even more striking, for all the datasets used, when compared against other unsupervised methods. Finally, HUME provides reliable samples on CIFAR-10. Strengths: From the related work explained in the paper, this is the first work in the literature to provide a method that does not rely on semantic knowledge of samples, eg clustering, but rather it uses a novel generalization perspective. The idea of using two pretrained models to generate human-like labels under an unsupervised manner is simple and interesting. The text is clear and overall well-written. The related work section is well expanded. Weaknesses: The paper could improve the analysis on the correlation of their labels with the ones humans provide, ie it’s not clear whether it generalizes to other datasets/settings, and this is a very important validation experiment of the proposed method since it’s part of the hypothesis of the paper (ie that human labeled tasks are linearly separable in a sufficiently strong representation space). Another analysis that could be added to the appendix is on the reliability of samples for other datasets (eg expands Fig. 3a of CIFAR-10 for other datasets). For now, it is only shown to work for CIFAR-10. Minor typo: line 107 “we only train linear classifiers on top pretrained representations” should probably have an “of” between “top” and “pretrained” words. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * What do the authors mean by “unbiased estimate” (line 90)? * Which distance measure is used for fig. 1? Could this be clarified in the paper? * Could the authors provide in appendix the same fig. 1 but for other datasets (eg CIFAR-100-20 and STL-10)? Since being well-correlated with humans is essential for the proposed method, it would be interesting to see if this result generalizes for other datasets as well. * In the paper it is mentioned in both Definitions 1 and 2 the sentence “attains low test error” (lines 101 and 105), but how low is not commented throughout the text and this is left a bit uncommented… could the authors expand on that? Could the authors also provide some analysis/experiments on this front? * Could the authors provide a better explanation for the difference between inductive and transductive procedures? Is it the case that the inductive one a model is learned during training, and then just the model is used to make predictions on unseen data, and in the transductive case the data is kept for the inference as well (eg using some form of nearest neighbor analysis)? * In Table 1, could the authors provide an intuition behind why the results seem to improve more for the transductive than for the inductive procedure? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes, limitations have been well addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging the simplicity of our methodology and the novelty of the generalization perspective for unsupervised learning. We are also glad that the reviewer appreciates the overall quality and soundness of our paper as well as thoroughness of the related work section. Below, we provide a detailed response to the reviewer’s questions and we hope that the provided clarifications will help in increasing the reviewer's confidence. >*The paper could improve the analysis on the correlation of their labels with the ones humans provide, ie it’s not clear whether it generalizes to other datasets/settings… Another analysis that could be added to the appendix is on the reliability of samples for other datasets* We thank the reviewer for suggesting additional analysis. In the response to the reviewer’s suggestion we include the requested evaluation. First, Fig. 1a and Fig. 1b in the rebuttal pdf shows the strong correlation ($\rho=0.9$) on the STL-10 dataset and fairly positive correlation ($\rho=0.47$) on the CIFAR-100-20 dataset. Secondly, Fig. 2a and Fig. 2b in the provided rebuttal pdf show that HUME can produce up to 100 almost perfectly accurate samples per class (>= 99% accuracy) on the STL-10 dataset and up to 100 reliable samples per class (>= 85% accuracy) on the CIFAR-100-20 dataset. We will supplement the future revision of the paper with the provided analysis to further improve the presentation of our work. >*What do the authors mean by “unbiased estimate” (line 90)?* The true risk of a model $f(\cdot)$ on task $\tau(\cdot)$ is $E_{x \sim p_{data}} \mathcal{L}(f(x), \tau(x))$. Since the true data distribution $p_{data}$ is not available we estimate it using samples from the given dataset $x_1, \dots, x_{N_{test}} \sim p_{data}$ using Eq. (1) which is unbiased estimate of the true risk. We will clarify this in the revised paper. >*Which distance measure is used for fig. 1? Could this be clarified in the paper?* We thank the reviewer for noticing the missing explanation of the used distance measure. Caption of Fig. 1 refers to this distance as a generalization error. Indeed, we compute it as follows: $\frac{1}{N_{test}}\sum_{i=1}^{N_{test}}[\tau_{gt}(x_i) \neq \tau(x_i)]$ where $\tau_{gt}(\cdot)$ is the ground truth labeling of the corresponding dataset and $\tau(\cdot)$ is the labeling found by HUME. We will further clarify this in the future revision of the paper. >*In the paper it is mentioned in both Definitions 1 and 2 the sentence “attains low test error” (lines 101 and 105), but how low is not commented throughout the text and this is left a bit uncommented… could the authors expand on that?* We thank the reviewer for suggesting the improvement of the presentation of our paper. By “attains low test error” we mean that the ground truth labeling achieves the lowest generalization error (Eq. 1) and the lowest value of HUME’s objective (Eq. 2) correspondingly for Definition 1 and Definition 2. Indeed, these definitions are key assumptions of HUME which are then validated using correlation plots (Fig. 1 in the main paper and Fig. 1ab in the rebuttal pdf). We will add these clarifications in the future revision of our work to further strengthen the presentation of our work. >*Could the authors provide a better explanation for the difference between inductive and transductive procedures? Could the authors provide an intuition behind why the results seem to improve more for the transductive than for the inductive procedure?* Each dataset consists of train $D_{train}$ and test $D_{test}$ splits. Inductive setting corresponds to training HUME only on $D_{train}$ and then evaluating on $D_{test}$. Since HUME is an unsupervised method and does not utilize any labels, we can also evaluate HUME in the transductive setting which corresponds to training HUME on the entire dataset $D = D_{train} \cup D_{test}$ and evaluating on $D_{test}$. Thus, transductive setting utilizes more data to train HUME expectedly resulting in better performance. We will explain the difference between inductive and transductive settings in the future revision of the paper. --- Rebuttal Comment 1.1: Comment: Thanks authors for the detailed rebuttal. My concerns have been addressed, and I have updated my score accordingly. --- Reply to Comment 1.1.1: Comment: We thank the reviewer very much for the response and for raising the score. We are happy that the reviewer finds our response effective.
Summary: This paper proposes a novel unsupervised learning algorithm called HUME. HUME aims to minimize the disagreement between two linear models. The two linear models consume features from different backbones, and thus they are iteratively optimized to map difference feature spaces into the same label space. HUME has achieved very promising results on extensive experiments conducted in this paper. Strengths: (1) This paper provides a fundamentally new view to tackle unsupervised learning by searching for consistent labelings between different representation spaces, i.e., human-labeled tasks are invariant to sufficiently strong representation spaces, it gives valuable insights to the following research works (2) HUME outperforms existing state-of-the-art unsupervised learning methods by a large margin and can achieve comparable or better performances than supervised learning on some tasks (3) HUME can produce reliable labels for semi-supervised learning to further boost the performance, which provides a good starting point for applications in many low-resource scenarios Weaknesses: I don't find substantial weakness from this paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I am personally interested in applying semi-supervised learning to the reliable pseudo labels. Could you provide the following results? (1) Comparison of using reliable pseudo labels versus human labels (2) Results on more datasets This is not obligatory, and you could put the results in the appendix. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: The authors have raised 2 limitations and proposed reasonable solutions for mitigation. I don't find other limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the very positive evaluation of our work and acknowledging the fundamentally new perspective on unsupervised learning which our work proposes. We are glad that the reviewer appreciates the extensiveness of our experiments and finds our results very promising. We are also glad that the reviewer recognizes the importance of tackling the well-established unsupervised learning problem from a fresh standpoint that can facilitate future research. >*Comparison of using reliable pseudo labels versus human labels and results on more datasets* We agree with the reviewer on the importance of supplementing the evaluation with additional datasets to further strengthen the experimental part of our work. In response to the reviewer’s request we added the following additional experiments: (i) accuracy plots of reliable samples on the STL-10 and CIFAR-100-20 datasets, (ii) semi-supervised learning using reliable samples on CIFAR-10 and (iii) evaluation of HUME on large scale fine-grained ImageNet-1000k benchmark. We will add these results in the future revision of the paper. First, Fig. 2a and Fig. 2b in the provided rebuttal pdf show that HUME can produce up to 100 almost perfectly accurate samples per class (>= 99% accuracy) on the STL-10 dataset and up to 100 reliable samples per class (>= 85% accuracy) on the CIFAR-100-20 dataset. Secondly, we apply the state-of-the-art semi-supervised method FreeMatch[1] on the CIFAR-10 dataset with 1,4, 25 and 400 samples per class. We compare the performance of FreeMatch with ground truth labels to FreeMatch supervised with HUME’s reliable samples (Table 3 in the rebuttal pdf). The results show that FreeMatch with HUME’s reliable samples performs on par with FreeMatch with ground truth labels. The only exception is CIFAR-10 with 4 samples. However, HUME produces 4 perfectly accurate reliable samples per class (see Fig. 3a in the main paper) and the discrepancy in this setting can be alleviated by averaging the results across several 4 reliable samples per class as it is done in the standard SSL evaluation. Due to the time constraints during the rebuttal period and substantial time needed to run FreeMatch, we could only do a single run for each setting. We will provide the complete results with multiple runs and standard deviation in the future revision of the paper. Last but not least, we add ImageNet-1000k dataset to evaluate HUME on the challenging large scale fine-grained classification benchmark. Table 1 in the rebuttal pdf shows the significant improvement of 24% in accuracy over existing large-scale unsupervised baselines proposed in the literature, thus confirming the applicability of HUME to more challenging problems. [1] FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning. ICLR 2023
Summary: This work proposes HUME, an unsupervised framework for inferring labels of an image dataset. Based on the assumptions of linear separability and model invariance, the authors propose an objective that utilizes two self-supervised models, one on the target dataset and another using large-scale pre-training. Specifically, a linear mapping is applied on the first model representation to generate pseudo labels, which is updated by MAML to minimize classification loss of another linear classifier on the second representation. HUME is empirically evaluated on three image datasets, outperforming deep clustering and sometimes also linear probing baselines. Strengths: - The paper is well organized, with the entire model built around two key assumptions (linear separability and model invariance). Strong justifications are provided for each component. - Experiments are conducted on multiple dataset and models to validate the generalizability of HUME. - Ablations studies are included on aggregation over n runs, regularization weight, self-supervised representation etc. Weaknesses: - My biggest concern is on the assumption of linear separability on pretrained representations. This is probably true for easy-to-classify datasets (e.g. CIFAR-10) but less likely for fine-grained tasks (e.g. CIFAR-100). There exists much more challenging vision datasets than the ones studied in the paper, where it may be difficult, if at all possible, to find a feature space that linearly separates all classes. I doubt HUME will scale favorably to these datasets. - Results in table 1 seems to confirm this suspicion, as the relative performance of HUME to linear probing drops quickly as the datasets becomes more difficult. - One may further argue that if the classes are already linearly separable, then only a few labeled examples per class are needed to achieve high accuracy, which should not cost substantial human effort to annotate. The most demanding tasks to annotate are often the ones for which a good representation does not exist. - The fairness of experimental comparisons is questionable: - HUME makes use of large pretrained models such as DINO, unlike other unsupervised methods that are learned on target dataset only. I am not sure to what extent the improvement should be attributed to the stronger feature representation of large models. The authors can consider adding comparison to deep clustering or linear probing using large models as backbone. - The authors compared to a variant of SPICE without joint training ($\rm SPICE_S$). The standard SPICE outperforms HUME by jointly updating the feature model. - If the class vocabulary is known, this also enables zero-shot classification by vision-language models (e.g. CLIP). I am curious how fully unsupervised methods like HUME compares to CLIP in assigning pseudo labels, or if the two can be combined into a stronger model (not just using CLIP ViT as a feature extractor, but facilitating its joint embedding space for image and text). - Despite the claim that HUME can benefit semi-supervised learning, no SSL models were trained and evaluated to verify this. The accuracy of reliable samples serves as an indirect proof, but cannot replace full semi-supervised experiments, e.g., by following the joint training step in SPICE. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Using 300 iterations (supp. L564) for the inner loop of MAML appears quite impractical. I would appreciate if the authors can comment on the computational cost of HUME, or correct any misunderstanding I had with the implementation. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are discussed in the paper. There is no discussion of broader societal impacts as the paper focus on foundational learning algorithms; though the authors may consider discussing the possibility of biases of large pretrained models being transferred to the pseudo labels through HUME, etc. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for valuable feedback and for appreciating the coherence of our presentation, strong justification of each component of our framework and extensiveness of our experiments. *The biggest concern of the reviewer is the assumption of linear separability of pretrained representations.* **Supervised linear probe** is a gold standard evaluation protocol in the self-supervised/weakly supervised representation learning (e.g., [1-3])]. This whole research area assumes that the produced representations are strong enough to be separated by a linear classifier. For example, [2] shows that supervised linear probe on top of CLIP-ViT-L-14 features outperforms the best known **supervised** model on 21/27 datasets. DINOv2 [1] provides in-depth study and shows the outstanding performance of supervised linear probe on a wide variety of tasks such as domain generalization, image and video classification, fine-grained benchmarks etc. In addition to strong empirical evidence, recent works [e.g., 4-6] study the linear separability of contrastive representations from the theoretical perspective. For example, [4] theoretically analyzes when contrastive representations exhibit linear separability in unsupervised domain adaptation, while [5] provides guarantees on the downstream performance of a linear model on top of pretrained representations. All these results validate the linear separability assumption of pretrained representations. Our work provides a new view on tackling unsupervised learning from this perspective and shows viability of the proposed concepts in practice. We will add a detailed exposition of this assumption in the revised paper. We hope that these clarifications help the reviewer to reconsider the overall evaluation of our work. [1] DINOv2: Learning Robust Visual Features without Supervision. arXiv preprint 2023. [2] Learning Transferable Visual Models From Natural Language Supervision. ICML 2021 [3] A Simple Framework for Contrastive Learning of Visual Representations. ICML 2020 [4] Beyond Separability: Analyzing the Linear Transferability of Contrastive Representations to Related Subpopulations. NeurIPS 2022. [5] A Theoretical Analysis of Contrastive Unsupervised Representation Learning. ICML 2019 [6] Contrastive estimation reveals topic posterior information to linear models. JMLR, 2021 >*I doubt HUME will scale favorably to more challenging vision datasets.* To show scalability to large datasets we now included results on the ImageNet-1k dataset. Results in Table 1 (rebuttal pdf) show the remarkable improvement of 24% in accuracy over existing large-scale unsupervised baselines from the literature, thus confirming the scalability of HUME to challenging fine-grained benchmarks. >*The authors can consider adding comparison to deep clustering or linear probing using large models as backbone.* We now include an additional baseline by clustering DINO representations. Table 2 (rebuttal pdf) shows that HUME achieves 51%, 17% and 18% improvement in accuracy over this baseline on the STL-10, CIFAR-10 and CIFAR-100-20 datasets, respectively. >*Despite the claim that HUME can benefit semi-supervised learning, no SSL models were trained and evaluated to verify this.* We thank the reviewer for suggesting this analysis. Any SSL method can be applied after HUME. We now applied the state-of-the-art SSL method FreeMatch on the CIFAR-10 with 1,4,25 and 400 samples per class. We compare the performance of FreeMatch w/ ground truth (GT) labels to FreeMatch w/ HUME’s reliable samples (Table 3 in the rebuttal pdf). Results show that FreeMatch w/ reliable samples performs on par with FreeMatch w/ GT labels except on CIFAR-10 with 4 samples.However, HUME produces 4 perfectly accurate samples per class (see Fig. 3a) and the discrepancy in this setting can be alleviated by averaging the results across several 4 reliable samples per class as it is done in the standard SSL evaluation. Due to the time constraints during the rebuttal and substantial time needed to run FreeMatch, we could only do a single run for each setting. We will provide the complete results with multiple runs in the revised paper. >*The authors compared a variant of SPICE without joint training. …* SPICE includes 3 steps: (1) self-supervised pretraining, (2) clustering, and (3) joint training with SSL method. Given reliable samples produced in step (2), any SSL method can be applied in the step (3) for both SPICE and HUME. Thus, we directly compared the crucial clustering step between SPICE and HUME since this step is used to generate reliable samples and solves unsupervised learning task. Results show that HUME significantly outperforms SPICE thus consequently results in better pseudo-labels. Moreover, the additional results in the previous answer demonstrate that running SSL methods on reliable samples of HUME is comparable to running SSL on ground truth labels. >*I am curious how fully unsupervised methods like HUME compares to CLIP...* HUME can be easily applied for image-text data where $\phi_1(\cdot)$ is the text encoder and $\phi_2(\cdot)$ is the image encoder. We agree that it is an interesting research direction to combine the two but it is out of scope of this work focused on single-modality data and unsupervised learning setting compared to zero-shot in CLIP. We leave this for future work. >*Using 300 iterations for the inner loop of MAML appears quite impractical…* HUME’s inner optimization is not sensitive to the hyperparameters since HUME’s inner optimization is a convex optimization problem. For simplicity, HUME utilizes cold-start bilevel optimization (BLO) [1] but thanks to the form of HUME’s objective, any sophisticated BLO algorithm, e.g. [2], can be applied to improve the convergence rate and reduce iteration complexity of HUME. [1] On Implicit Bias in Overparameterized Bilevel Optimization. ICML 2022 [2] Efficiently Escaping Saddle Points in Bilevel Optimization. arXiv 2023 --- Rebuttal Comment 1.1: Comment: Thank you for their detailed response! The additional results are encouraging and addressed my primary concerns, and I have updated my rating to borderline accept. I feel that some of the rebuttal experiments (ImageNet, DINO w/ clustering, SSL) should be expanded and included in the paper to make the arguments more convincing. I am still a bit concerned that unsupervised learning is less critical when strong representation spaces exist ("if the classes are already linearly separable, then only a few labeled examples per class are needed to achieve high accuracy, which should not cost substantial human effort to annotate" in original review), though I see the potential of HUME as a bridge between unsupervised and semi-supervised learning in eliminating the need to label even a few examples. I would love to know the authors' view on this. Overall I am cautiously optimistic that this work presents sufficient contributions to be accepted to NeurIPS. --- Reply to Comment 1.1.1: Comment: > Thank you for their detailed response! The additional results are encouraging and addressed my primary concerns, and I have updated my rating to borderline accept. I feel that some of the rebuttal experiments (ImageNet, DINO w/ clustering, SSL) should be expanded and included in the paper to make the arguments more convincing. We thank the reviewer for the response and for raising the score. We are glad that our clarifications helped to increase the reviewer's confidence in our work. We will include these additional experiments in our paper and we thank the reviewer for the suggestions. Below we address the remaining concern of the reviewer. We hope that our response will further help to increase the reviewer's confidence in our work. >I am still a bit concerned that unsupervised learning is less critical when strong representation spaces exist ("if the classes are already linearly separable, then only a few labeled examples per class are needed to achieve high accuracy, which should not cost substantial human effort to annotate" in original review), though I see the potential of HUME as a bridge between unsupervised and semi-supervised learning in eliminating the need to label even a few examples. I would love to know the authors' view on this. We thank the reviewer for an additional question. In response to the reviewer’s feedback, we have conducted additional experiments that compare HUME to few-shot learning baselines. Namely, starting from the strongest DINO representation space, we employ two different few-shot baselines: (i) following [1], we compute a prototype for each class on top of DINO representations using few-labeled examples and then assign examples to the nearest prototype (we refer to it as proto baseline), and (ii) we train a linear classifier on top of DINO representations using a few labeled examples (we refer to it as linear baseline). Remarkably, the results show that HUME consistently outperforms these two baselines in the 1-shot and 2-shot learning settings on all datasets. In the 3-shot learning setting HUME outperforms these baselines on the STL-10 dataset, matches performance on the CIFAR-10 dataset, and these baselines show better performance only on the CIFAR100-20 which in this setting requires *60 labeled examples*. In particular, in the 1-shot learning setting HUME significantly outperforms proto baseline by 39%, 24% and 44% on the STL-10, CIFAR-10 and CIFAR-100-20 datasets, respectively. Similarly, HUME outperforms linear baseline by 39%, 23% and 37% on the STL-10, CIFAR-10 and CIFAR-100-20 datasets, respectively. In the 2-shot learning setting, HUME outperforms proto baseline by 18%, 7% and 6% on the STL-10, CIFAR-10 and CIFAR-100-20 datasets, and linear baseline by 22%, 8% and 4%, respectively. It is important to emphasize that in this setting these baselines use *20 labeled examples* for STL-10 and CIFAR-10, and *40 labeled examples* for CIFAR-100-20 while HUME works *fully unsupervised*. Only, in the 3-shot learning setting we start seeing the benefits of few-shot baselines but only on the CIFAR-100-20 dataset. In particular, on the STL-10 dataset HUME still outperforms proto and linear baselines by 9% and 11% respectively, and matches performance of these baselines on the CIFAR-10 dataset, and proto and linear baselines outperform HUME by 11% and 13% on CIFAR-100-20 dataset by using 60 labeled examples. On the STL-10 dataset, we observe that HUME keeps outperforming these baselines even in a challenging 5-shot learning setting where these baselines use 50 labeled examples. We would also like to emphasize that obtaining few-labeled examples per class is in reality hard because it requires a strong prior knowledge of knowing the complete space of all possible classes in advance and then labeling a few examples per class. In many real-world tasks, this is impractical and very difficult. We thank the reviewer for this question and we will also include few-shot baselines in our paper. We believe these results further strengthen our work. Overall, our results show that although being completely unsupervised, HUME (i) outperforms *fully supervised* linear model on top of MOCO representations on the STL-10 dataset and matches the performance on the CIFAR-10 dataset (main paper results); (ii) outperforms challenging *few-shot learning baselines* ran on top of DINO representations in the 1-shot and 2-shot learning scenarios, and (iii) *semi-supervised method FreeMatch* using reliable samples from HUME matches the performance of the FreeMatch with ground truth labels. [1] Snell et al. Prototypical Networks for Few-shot Learning. NeurIPS 2017.
Summary: The work presents a new method for inferring human labeling without supervision. The main idea of the method is that human labels for a dataset should be linearly separable for all good representations, and relatively invariant to the representation space. The method uses bi-level optimization with an inner loop that optimizes for the pseudo-labels of a training set, and an outer loop which optimizes for the test loss of a classifier trained on the pseudo-labels. HUME achieves state-of-the-art performance on standard unsupervised learning benchmarks, CIFAR and STL-10. Strengths: - The insight that human labels should be linearly separable in any sufficiently strong representation space is original and interesting. Showing that the objective function proposed in HUME is well correlated with human labeling is surprising, and would likely be useful to future work. - HUME outperforms state-of-the-art on standard unsupervised learning benchmarks by a fair margin and impressively outperforms the supervised linear probe on STL-10. - The paper is well organized and the writing is clear. Weaknesses: HUME uses DINO and MOCO while other methods only use MOCO. It could be that DINO is a better representation given that HUME with DINO outperforms HUME with other representations. It’s also unclear whether the performance gain is coming from ensembling multiple models or the optimization procedure of HUME. It would be useful to see an ablation where HUME is trained only on a MOCO representation for a straight-forward comparison to other unsupervised methods and to understand how much each component of HUME contributes. As stated in the limitations, the number of classes and label distribution must be known a priori. Other works in this subfield make the same assumption so this weakness is not particular to HUME. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can the method scale to larger datasets considering that second-order gradients need to be calculated? Does approximating the Hessian affect performance? - MAML is known to be difficult to optimize with respect to hyper-parameters. Does this make HUME sensitive to hyper-parameters such as the number of inner-loop steps, learning rate, etc? - In equation 3, you constrain the weights of the linear layer, $W_1$, to be orthonormal, but I didn’t see a mention of the method you’re using to do so. What is the reasoning for this design choice? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are well addressed by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive evaluation of our work and for acknowledging the originality of the proposed method as well as its significance for future research. We are glad that the reviewer finds the results impressive and our paper well written. >*HUME uses DINO and MOCO while other methods only use MOCO…* To show that the increase in performance comes from the HUME’s optimization procedure, we included an additional baseline which, like HUME, utilizes both DINO and MOCO representation spaces at the same time (Table 2 in the rebuttal pdf) . We generate embeddings from DINO and MOCO, concatenate these embeddings and then perform clustering on top of concatenated embeddings. Results show that HUME achieves 18%, 9% and 8% improvement in accuracy over the baseline on the STL-10, CIFAR-10 and CIFAR-100-20 datasets, respectively. We will add these results in the revised paper. >*It’s also unclear whether the performance gain is coming from ensembling multiple models or the optimization procedure of HUME.* We thank the reviewer for the insightful question. The results in Fig. 3b and Fig. D1 in the Appendix D show the ablations of using different aggregation strategies (ensembling). The leftmost strategy (at x = 1) corresponds to a single labeling (w/o ensembling), while the rightmost data point (at x = 100) corresponds to aggregation over all models. Results show that utilizing stronger pretrained models eliminates the need of ensembling many models, since they lead to more robust performance. Additionally, given the high correlation between HUME’s objective and human labeling, HUME achieves even better performance by aggregating only the top few models compared to our current strategy that aggregates all models. However, we do not optimize for this in our experiments since our goal is not to report the best results HUME can achieve but rather to demonstrate the correlation between HUME’s objective and ground truth labeling. In response to the reviewer's feedback, we will provide these clarifications in the revised paper. >*It would be useful to see an ablation where HUME is trained only on a MOCO representation …* We thank the reviewer for suggesting this evaluation; however, it is not applicable to HUME as it leads to a degenerate solution. The reason is that the key idea of HUME is to jointly use **two different** representation spaces to find ground truth data labeling over all possible labelings. HUME utilizes two representation spaces $\phi_1(\cdot)$ and $\phi_2(\cdot)$. $\phi_1(\cdot)$ is used by the task encoder $\tau(\cdot) = W_1 \frac{\phi_1(\cdot)}{\|\phi_1(\cdot)\|_2}$ to propose labelings of the data using linear layer $W_1$ at each iteration of the method. $\phi_2(\cdot)$ is used to assess the generalizability of the current proposal $\tau$ using linear layer $W_2$ in the representation space $\phi_2(\cdot)$. Since task encoder $\tau$ proposes labelings using linear layer $W_1$, **utilizing the same representation space** $\phi_2(\cdot)$, i.e., $\phi_2(\cdot) = \phi_1(\cdot)$, **will automatically lead to degenerate solutions**, i.e. $\forall\ W_1, \ W_2 = W_1$ can be easily inferred using logistic regression (inner optimization Eq. (4)) and, thus, will always be a generalizable task, which will prevent HUME from finding the ground truth labeling. We will explain this and clarify the importance of utilizing two **different** representation spaces in the revised paper. >*Can the method scale to larger datasets …* HUME is a scalable method that can be applied to the dataset of any size. The inner optimization problem in HUME is multiclass logistic regression solved by running stochastic gradient descent for a fixed number of iterations, thus depending only on the batch size and dimensionality of the representations rather than the dataset size. Additional experiments on large scale Imagenet-1000k benchmark provided in Table 1 (rebuttal pdf) further support wide applicability of HUME. It is also important to highlight that HUME’s optimization objective (Eq. 7) has a form of a stochastic bilevel optimization (SBO) problem with the convex inner part. Thus, MAML can be replaced by any sophisticated convex SBO algorithm which is a rapidly developing research area [1, 2]. We will comment on this in the revised paper. [1] Huang et al. Efficiently Escaping Saddle Points in Bilevel Optimization. arXiv 2023 [2] Dagreou et al. A framework for bilevel optimization that enables stochastic and global variance reduction algorithms. NeurIPS 2022 >*MAML is known to be difficult to optimize with respect to hyper-parameters. Does this make HUME sensitive to hyper-parameters …?* HUME’s inner optimization is not sensitive to the hyperparameters since HUME’s inner optimization is a convex optimization problem, i.e. multiclass logistic regression, thus it does not require extensive hyperparameter search. In our preliminary experiments on the CIFAR-10 we set a learning rate to 0.001 and number of iterations equal to 300 and fix them across all experiments. >*In equation 3, you constrain the weights of the linear layer, $W_1$, to be orthonormal, but I didn’t see a mention of the method you’re using to do so. What is the reasoning for this design choice?* Orthogonal weight matrix $W_1$ is parametrized using a product of elementary Householder reflectors, i.e., $W_1 = H_1 H_2 \dots H_m$, where each $H_i = I - \tau v_i v_i^T$. We utilize the standard PyTorch parametrizations module making the implementation straightforward in practice. We will add these details in the revised paper. This design choice may be viewed as learning prototypes for each class which is an attractive modeling choice for the representation space $\hat{\phi}_1(\cdot) = \frac{\phi_1(\cdot)}{\|\phi_1(\cdot)\|_2}$. Thus, $W_1 \hat{\phi}_1(\cdot)$ corresponds to the cosine similarities between class prototypes $W_1$ and the encoding of the sample $\hat{\phi}_1(\cdot)$ (lines 138-142). --- Rebuttal Comment 1.1: Title: Response to Author Rebuttal Comment: I thank the authors for their response. My concerns have been addressed and all details have been clarified. The insight of using linear separability across representations to derive human-like labels is interesting and I maintain my score of accept. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the response and for keeping confidence in our work. We are glad that our clarifications effectively addressed reviewer’s concerns. We are also happy to hear that the reviewer finds the key idea of our work interesting and useful for future work in this field.
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable feedback and positive evaluation of our work. We appreciate fruitful suggestions of the reviewers that helped to improve the overall presentation of our work. In the response to the reviewers’ feedback, we have conducted additional experiments shown in the attached pdf and included in the response to reviewers. The attached pdf includes: * Correlation analysis between HUME’s objective and ground truth labeling on two additional datasets, STL-10 and CIFAR-100-20 (Figure 1). The results show strong correlation ($\rho=0.9$) on the STL-10 dataset and fairly positive correlation ($\rho$=0.5) on the CIFAR-100-20 dataset. * Analysis of accuracy of reliable samples produced by HUME on two additional datasets, STL-10 and CIFAR-100-20 (Figure 2). The results show that HUME can produce up to 100 almost perfectly accurate samples per class (>= 99% accuracy) on the STL-10 dataset and up to 100 reliable samples per class (>= 85% accuracy) on the CIFAR-100-20 dataset. * Evaluation of HUME on the large-scale fine-grained ImageNet-1000k benchmark (Table 1). The results show that HUME achieves substantial improvements over existing large-scale unsupervised baselines proposed in the literature, outperforming the best baseline by 24% in terms of accuracy. * Additional baselines constructed by (i) running clustering on top of DINO representations, and (ii) running clustering on top of representations obtained by concatenating DINO and MOCO representations (Table 2). The results show that HUME outperforms these baselines by 18%, 9% and 8% in accuracy on the STL-10, CIFAR-10 and CIFAR-100-20 datasets, respectively. * Evaluation of semi-supervised learning method FreeMatch on reliable samples produced by HUME (Table 3). We compare FreeMatch performance on HUME”s reliable samples with FreeMatch performance on ground truth labeling. The results show that FreeMatch with reliable samples performs comparably to FreeMatch with ground truth labeling. We provide more details and address reviewers’ comments in the individual response to reviewers. We hope that our detailed response and additional experiments will help to increase reviewers’ confidence. Pdf: /pdf/4fd99be36164baa3a472a66401c7edaa05c48fee.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: The authors propose a clever method that exploits recent findings about the effectiveness of linear probing across diverse representation spaces in order to discover semantically-meaningful clusters (i.e. ones that correspond to human labeling) by seeing which linearly-separable clusters are preserved across multiple representation spaces. Strengths: - HUME is a very clever method that beautifully exploits recent findings about the effectiveness of linear probing across diverse representation spaces. - The paper is well-written and easy to follow. Weaknesses: - It's unclear whether performance gains are driven by HUME or by the strength of the second representation space. Additional baseline experiments with linear probing and clustering on top of the large pre-trained model representations (DINOv2, CLIP, ViT) would help clarify this. Another nice baseline evaluation that could help would be to simply concatenate the embeddings from the two representation spaces and repeat the linear probing and clustering evaluations on that. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - When producing reliable samples for SSL, what is the class balance? The usefulness is impacted significantly by the diversity of the samples HUME can produce. I see in Fig 3a) that samples are produced from each class, but is performance roughly the same across all classes? - Are all three of the large-pretrained models that you use pre-trained in unsupervised ways (e.g. none were ever actually trained on the ImageNet classes)? - Unrelated to the review/scoring and purely out of curiosity, why did you name your method HUME? Is it an acronym for something? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: See weaknesses and questions sections. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive evaluation of our work and for acknowledging the methodological novelty of our method. We are glad that the reviewer finds HUME to be *a very clever method that beautifully exploits recent findings about the effectiveness of linear probing* as well as our paper to be well-written and easy to follow. Below, we address the reviewer's concerns and we hope that our response helps to increase the reviewer's confidence. >*It's unclear whether performance gains are driven by HUME or by the strength of the second representation space. Additional baseline experiments with linear probing and clustering on top of the large pre-trained model representations (DINOv2, CLIP, ViT) would help clarify this. Another nice baseline evaluation that could help would be to simply concatenate the embeddings from the two representation spaces and repeat the linear probing and clustering evaluations on that.* We thank the reviewer for suggesting to incorporate additional baselines. Based on the reviewer’s suggestion, we now included two additional baselines (1) clustering on top of the DINO representation space as the strongest representation space and (2) clustering on top of concatenated embeddings from DINO and MOCO. The results in Table 2 in the rebuttal pdf show that the performance gains are driven by HUME. Compared to DINO, HUME achieves an improvement of 51%, 17% and 18% in accuracy on the STL-10, CIFAR-10 and CIFAR-100-20, respectively. Compared to DINO+MOCO, HUME achieves 18%, 9% and 8% relative improvement in accuracy on the STL-10, CIFAR-10 and CIFAR-100-20, respectively. We will add these baselines in the future revision of the paper to further strengthen the evaluation part of our work. >*When producing reliable samples for SSL, what is the class balance? The usefulness is impacted significantly by the diversity of the samples HUME can produce. I see in Fig 3a) that samples are produced from each class, but is performance roughly the same across all classes?* We thank the reviewer for suggesting additional analysis of reliable samples produced by HUME. The detailed statistics requested by the reviewer are provided in the Table below. We follow standard protocol for the evaluation of semi-supervised learning methods [1] and consider 4, 100 samples per class on the STL-10 dataset and 1, 4, 25, 400 samples per class on the CIFAR-10 dataset. On the STL-10 and CIFAR-10 datasets HUME shows almost perfect balance and mean per class accuracy, i.e., up to 100 samples per class on STL-10 and up to 400 samples per class on CIFAR-10. **Per Class Balance** measures number of samples for each ground truth class, i.e., $\frac{1}{K}\sum_{k=1}^{K}\sum_{j \in R} [y_j = k]$ where $R$ is the set of indices of produced reliable samples and $y_j$ is the groundtruth label of sample $j$. **Per Class Accuracy** measures the average per class accuracy of the corresponding set of the reliable samples. The number in the brackets in the dataset title corresponds to the number of reliable samples per class. | | STL-10 (4) | STL-10 (100) | CIFAR-10 (1) | CIFAR-10 (4) | CIFAR-10 (25) | CIFAR-10 (400) | CIFAR-100-20 (1) | CIFAR-100-20 (10) | CIFAR-100-20 (50) | CIFAR-100-20 (100) | |--------------------|--------------|---------------|--------------|--------------|---------------|----------------|------------------|-------------------|-------------------|--------------------| | Per Class Balance | 4.0 +- 0.0 | 100.0 +- 1.4 | 1.0 +- 0.0 | 4.0 +- 0.0 | 25.0 +- 0.5 | 400.0 +- 1.3 | 0.6 +- 0.6 | 6.3 +- 5.2 | 26.3 +- 23.6 | 52.6 +- 45.6 | | Per Class Accuracy | 100.0 +- 0.0 | 99.6 +- 0.9 | 100.0 +- 0.0 | 100.0 +- 0.0 | 99.6 +- 1.2 | 99.7 +- 0.4 | 72.7 +- 44.1 | 62.3 +- 41.9 | 51.1 +- 46.5 | 48.9 +- 47.2 | We thank the reviewer for the insightful question and we will include these results in the final revision of the paper. [1] Wang et al. FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning. ICLR 2023 >*Are all three of the large-pretrained models that you use pre-trained in unsupervised ways (e.g. none were ever actually trained on the ImageNet classes)?* CLIP and DINO are pretrained in an unsupervised way using contrastive learning based methods. CLIP ViT-L/14 is pretrained on WebImageText and DINOv2 ViT-g/14 is pretrained on LVD-142M. The only model pretrained in the supervised manner is BiT-M-R50x1 pretrained on ImageNet-21k. However, although BiT-M was pretrained in a supervised regime it shows the worst performance of the three considered large pretrained models and DINO and CLIP both show much better transferability of the representations to downstream datasets. We will comment on this in the revised paper to improve the presentation of our work. >*Unrelated to the review/scoring and purely out of curiosity, why did you name your method HUME? Is it an acronym for something?* HUME comes both as the acronym for HUMan labEling and from David Hume’s surname. David Hume was a philosopher who proposed new views on existing things. In our work, we propose a new perspective on the well established unsupervised learning field, thus reflecting the name. In addition, Hume also was an Empiricist, meaning he believed "causes and effects are discoverable not by reason, but by experience". In our work we **empirically** observe the surprising correlation between generalization error of ground truth labelings of a data and the proposed optimization objective, thus reflecting the empirical research approach. --- Rebuttal Comment 1.1: Comment: Thank you for the excellent response! My concerns have been resolved, and I have updated my score accordingly. On a side note, the name is great and the acronym design, while definitely a bit of a stretch ("labEling"), is well-justified by the motivation and not even the most egregious acronym I've seen this reviewing season :-) --- Reply to Comment 1.1.1: Comment: We thank the reviewer very much for the prompt response and for raising the score. We are very happy that the reviewer finds our response effective. We are also happy to hear that the reviewer likes acronym of our method and the story behind it.
null
null
null
null
null
null
Addressing Negative Transfer in Diffusion Models
Accept (poster)
Summary: This paper discusses the use of diffusion-based generative models for various generative tasks, such as image, video, 3D shape, and text generation. The authors argue that negative transfer, which refers to competition between conflicting tasks leading to decreased performance, should be investigated and addressed in diffusion models. They observe a negative correlation between task affinity and the difference in noise levels, suggesting that adjacent denoising tasks are more harmonious. They also find evidence of negative transfer in diffusion model training, where utilizing a model trained exclusively on denoising tasks within a specific interval generates higher-quality samples compared to a model trained on all tasks simultaneously. To address negative transfer, the authors propose leveraging existing multi-task learning techniques and clustering denoising tasks into intervals. They demonstrate the effectiveness of their approach through experiments on image datasets, showing improved quality in generated images. Strengths: 1, The paper provides a comprehensive analysis of diffusion-based generative models, considering their performance and flexibility in various generative tasks, including image, video, 3D shape, and text generation. It highlights the achievements and potential areas for improvement in diffusion models. 2, The authors identify and address the issue of negative transfer in diffusion models. They observe evidence of negative transfer during model training and propose a clustering approach to mitigate its impact on denoising tasks. This analysis adds valuable insights to the field of multi-task learning and diffusion models. Weaknesses: 1, The experiments conducted in the paper are limited to specific datasets (FFHQ and CelebA-HQ) and specific types of diffusion models (Ablated Diffusion Model and Latent Diffusion Model). This limited scope may restrict the generalizability of the proposed approach to other datasets or types of diffusion models. The paper could have benefited from evaluating the method on a broader range of datasets and models. 2, While the paper presents empirical results to support the effectiveness of the proposed approach, it lacks in-depth theoretical analysis or mathematical formulation of the clustering strategy. A more rigorous theoretical analysis would have provided a deeper understanding of the approach and its underlying principles. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1, What is the proposed clustering approach for addressing negative transfer in diffusion models? How does it take into account noise levels and task affinity? 2, What are the limitations of the proposed clustering approach in terms of scalability and computational complexity? 3, How did the authors validate the effectiveness of their approach? Can you provide more details about the experimental setup and the results obtained? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: As stated in "Weakness". Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the insightful feedback provided by reviewer sPtA. We will address the raised concerns and revise the paper accordingly, as these comments contribute significantly to improving our work. --- ### **W1: Lack of experiments on different datasets and diffusion models.** Thank you for bringing up this point. We acknowledge the importance of validating our method across diverse models and datasets. In response, we extended our experiments to include the transformer-based DiT-S architecture on the ImageNet 256x256 dataset. The results are presented in **Fig. A-B in the Global Response**. Across different architectures and larger-scale datasets like ImageNet, our method consistently enhances FID, IS, and Precision across MTL methods compared to vanilla training. Notably, methods (UW, NashMTL) excelling in balancing are particularly effective. Additionally, convergence analysis in Fig. B highlights that our method (especially UW and NashMTL) noticeably accelerates convergence compared to vanilla training, and eventually leads to superior final performance. These experiments strongly affirm our method's generalizability. We will add these results to the final version of the manuscript. --- ### **Q1: Proposed clustering approach for addressing negative transfer.** The intuition behind task clustering is: - The clustering of tasks is necessary to apply MTL methods to diffusion models. Treating each denoising task as an independent task and applying MTL methods is simply not feasible due to instability and large computational complexity. - By clustering the denoising tasks into task groups, we best reap the benefits of multi-task learning. In other words, by letting the model treat similar tasks assigned to a task cluster simultaneously, we may expect that it prevents negative transfer among such tasks. Meanwhile, we use various methods suggested in MTL literature to reduce the negative transfer between tasks belonging to different task clusters. Considering these, we opt for the interval clustering method based on our observation that tasks close in timestep have higher task affinity scores. To demonstrate the effectiveness of interval clustering, we tried and applied MTL methods on top of task groups discovered by Higher Order Approximation (HOA)-based task grouping which does not constraint that tasks within a cluster must be neighboring in timesteps (See Appendix D.3). We found that interval clustering outperforms HOA, showing grouping tasks into disjoint intervals is a valid strategy. We consider timestep, SNR, and task affinity scores as clustering criteria. By considering SNR-based clustering costs, our method encourages the grouping of tasks with similar noise levels, effectively accounting for noise levels. By using task affinity-based objectives, our approach enhances task affinity among clusters, allowing for the inclusion of task affinity in cluster considerations. --- ### **W2 & Q2: In-depth theoretical analysis and computational complexity.** We appreciate your thoughtful feedback. Regarding the theoretical analysis of the clustering algorithm, we have built upon well-established principles outlined in [43, 65]. In Appendix G, we have detailed use of dynamic programming to optimize interval clusters. Let $n$ and $k$ denote the number of data points ($T$, in our case) and clusters, respectively. The interval clustering algorithm employs a matrix $D[i,m]$ to store minimum clustering costs for interval length $i$ and cluster count $m$, with a total memory cost of $O(nk)$. Each matrix element involves $O(n\omega(n))$ computation, resulting in time complexity of $O(n^2k\omega(n))$. For each of our three clustering cost functions: - **Timestep**: The calculation of cluster costs takes $O(n)$ time. Hence, this approach operates in $O(n^3k)$ time and uses $O(nk)$ memory. - **SNR:** Computation of SNR in constant time keeps its time complexity at $O(n^3k)$. Memory complexity is also $O(nk)$. - **Gradient:** Unlike the aforementioned methods, the gradient-based clustering capitalizes on precomputed task affinity scores $\mathrm{TAS}(t_x,t_y)$ for $x,y\in[1, T]$, resulting in a memory cost of $O(n^2)$. The clustering cost computation involves memory-referencing, resulting in time complexity of $O(n^3k)$ and memory $O(nk+n^2)$. Furthermore, it's worth noting that potential exists for further optimization, reducing time complexity to $O(n^2k)$, as shown in [A]. --- ### **Q3: Evaluation of the Proposed Method**. For fair comparison and analysis, our experimental setup follows that of prior research such as ADM [7], P2 [6], and LDM [47]. We provide a list of all our experiments and their corresponding experimental setup details below: - Section 5.1: Incorporating MTL methods with interval clustering dramatically improves diffusion training. Setups are in Section 5 and Appendix E.1. - Section 5.2: 1) MTL methods exhibits fast convergence. 2) Our methods improve negative transfer in diffusion models. Setups are presented in Section 5 and Appendix E.1. - Section 5.3: Our method is also effective on a more sophisticated training objective, P2. Setups are presented in Section 5 and Appendix E.1. - Appendix D.3: Interval clustering outperforms the HOA-based task grouping method in terms of incorporating MTL methods. Setups are presented in Appendix D.3. - Other experimental analyses: We conducted several empirical analyses about 1) the existence of negative transfer, 2) task affinity, 3) ablation on $k$, and 4) the behavior of MTL methods. In the final version, we will clearly present the above experimental settings by making explicit references. Furthermore, we release experimental code for explicitly sharing our experimental settings. --- ### **References** [A] Grønlund, et al., “Fast exact k-means, k-medians and Bregman divergence clustering in 1D.”, arXiv 2017.
Summary: This paper analyzes diffusion training from an MTL perspective and observes the negative transfer in diffusion training. Several multi-task learning algorithms are employed to address the negative transfer problem, leading to improved performance. Strengths: 1. The paper presents a detailed analysis of task affinity and negative transfer across different diffusion steps, which is interesting. 2. The incorporation of MTL methods has yielded a substantial improvement in the performance of the diffusion model. Weaknesses: 1. The paper lacks a comparison of time and GPU memory usage between the vanilla and MTL approaches. Typically, MTL methods require more time and significantly higher GPU memory. 2. It would be beneficial to include random weights and linear scalarization baselines. Several studies have suggested that linear scalarization and random weights can serve as strong baselines [1][2]. [1] Xin et al., Do Current Multi-Task Optimization Methods in Deep Learning Even Help?, NeurIPS 2022 [2] Lin et al., A closer look at loss weighting in multi-task learning, arXiv 2022 Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Have you considered utilizing the weights obtained by MTL approaches as the loss weights for training the model? For instance, using the weighted loss with the mean weight of each interval in Figure 3b. Would this approach yield comparable results? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: See weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate the insightful comments by reviewer 8Ptq. The comments are very helpful in making our work more complete from an MTL perspective. We will address all raised concerns by the reviewer and revise the paper accordingly. --- ### **W1: Lack of comparison of time and GPU memory complexity.** Thank you for your valuable suggestion. First, we point out that clustering cost is almost free compared to the cost of diffusion model training, with the exception of gradient-based clustering which requires training of a diffusion model until convergence. Next, in order to assess the additional resource requirements incurred by MTL, we measured GPU memory usage and runtime when training with the LDM architecture and timestep-based clustering on the FFHQ dataset. The results are summarized in **Table A in the General Response**. We explain the results below: - PCgrad exhibits a runtime of 1.523 iterations/second, which is comparatively slower than vanilla training's pace of 2.108 iterations/second. This discrepancy primarily arises from the inherent requirement of PCgrad to compute per-interval gradients for every iteration. However, it is worth noting that PCgrad uses less GPU memory compared to vanilla training. This stems from the partitioning of minibatch samples across intervals for per-interval gradient calculation, resulting in a reduced number of samples employed for backpropagation compared to vanilla training. - NashMTL exhibits a marginal decrease in runtime with 2.011 iterations/second compared to vanilla training. Similar to PCgrad, this minor difference in runtime arises due to the requirement for per-interval gradients. However, NashMTL offers a practical speed-up strategy by computing per-interval gradients every few iterations, in contrast to PCgrad. This distinction results in NashMTL outperforming PCgrad significantly in terms of runtime. However, NashMTL uses more GPU memory, which can be attributed to the caching of the parts for calculating weights. - UW exhibits comparable runtime and GPU memory usage to vanilla training, owing to its utilization of weighted loss and efficient weight updates through a single backpropagation pass. We also note that applying MTL methods only incurs additional compute during training, and does not affect the inference time. We will add this additional information to our final manuscript. --- ### **W2: Additional baselines.** We are truly grateful for your valuable suggestion. Indeed, Linear Scalarization (LS) and Random Loss Weighting (RLW) should serve as simple and strong baselines, and comparison of our methods to them will establish the necessity of applying such sophisticated MTL methods as PCGrad, NashMTL, and Uncertainty Weighting. Accordingly, we provide results for LS and RLW on the FFHQ dataset using ADM architecture in **Table B of the Global Response**. We explain the results below: - As seen in the results, LS achieves slightly worse performance than vanilla training, which suggests that simply re-framing the diffusion training task as an MTL task and applying a naïve MTL loss is not enough. - Also, RLW achieves much worse performance compared to vanilla training. It appears that the randomness introduced by the weighting interferes with diffusion training. This indicates that sophisticated weighting schemes such as UW and NashMTL indeed are responsible for significant performance gain. Overall, these additional experiments support the effectiveness of the applied MTL methodologies. We will include these results in the final manuscript. --- ### **Q1: Utilizing weights obtained by MTL approaches.** Thank you for the suggestion. We find this approach very intriguing. As suggested, we obtained the average weight assigned to each task interval (for both UW and NashMTL), upon finishing the training of LDM on FFHQ. We then re-trained the model with the fixed weights obtained above. The results are presented in **Table C in the Global Response**. For both NashMTL and UW, using fixed weights leads to a slight degradation in FID and Precision Score, which suggests that adaptive weight assignment using NashMTL and UW offers an advantage, albeit slightly in this particular case. It is also notable that despite this slight decrease, using weighted loss with fixed weights still surpasses vanilla training in terms of sample quality. We will include this result in the final manuscript. --- Rebuttal Comment 1.1: Comment: Thanks for your responses. My concerns have been addressed. --- Reply to Comment 1.1.1: Title: Thanks to reviewer 8Ptq Comment: We deeply appreciate your effort in responding to the rebuttal. We are delighted that your concerns have been resolved.
Summary: This work explores the phenomenon of negative transfer in the diffusion training procedure, where different time steps or signal-to-noise ratios may conflict with each other and degrade overall performance. The authors propose a solution to this problem by introducing internal clustering and implementing several multi-task learning (MTL) methods to mitigate the negative transfer effect. The proposed approach is evaluated through experiments on the FFHQ and CelebA datasets, and the results demonstrate its effectiveness in improving performance compared to existing methods. Strengths: 1. It is interesting and novel to approach the training of diffusion models from the perspective of multi-task learning (MTL). By recognizing the potential for negative transfer in the diffusion training procedure, the authors of this work are able to leverage the benefits of MTL to mitigate the negative effects of conflicting time steps or signal-to-noise ratios. 2. The motivation sounds good and the preliminary experiment can demonstrate the negative transfer phenomenon in diffusion models. 3. After addressing negative transfer through internal clustering and MTL methods, the final model achieve good performance on FFHQ and CelebA. 4. The writing is good and easy to follow. Weaknesses: My main concern is that the experiments are only conducted on small datasets like FFHQ and CelebA. Now many standard benchmarks exist. e.g., ImageNet-64, ImageNet-256, CoCo. Experiments on more datasets are needed to prove the effectiveness of the proposed methods. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. It would be beneficial to include the average conflict of the baseline model in Fig 3(a) to provide a clearer comparison. 2. The choice of the number of clusters is an important parameter in the proposed internal clustering approach, and it would be beneficial to conduct further analysis and ablation studies to determine the optimal number of clusters. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to reviewer 7mPc for providing constructive comments, which are very helpful in improving our work through experimental results. We will address all raised concerns by the reviewer and revise the paper accordingly. --- ### **W1: Experiments are only conducted on small datasets.** Thank you for your valuable suggestion. We agree with your point that our method must be validated on a large-scale dataset. Accordingly, we conducted experiments on ImageNet 256 x 256 for the conditional generation task using the DiT-S model. We opted for a smaller architecture of DiT-S due to resource constraints. We report evaluation metrics for generated samples under varying guidance scales in **Fig. A of the Global Response**. As shown in Fig. A, our method consistently improves upon vanilla training in terms of FID, IS, and Precision scores by a large margin. Specifically, balancing methods such as UW and NashMTL are shown to be very effective under strong guidance. We also provide convergence analysis in **Fig. B of the Global Response**. As shown in Fig. B, we see that applying our method leads to faster convergence and better final performance, across all MTL methods considered. With the additional results on ImageNet, we believe our work has gained stronger empirical results. We will include the detailed experimental setup and results in the final manuscript. --- ### **Q1: Comparison to the average conflict of the baseline in Fig. 3a.** We thank the reviewer for this valuable insight. We report the average conflict of the baseline in **Fig. C of the Global Response**. As can be seen in Fig. C, the average conflict of the baseline does not significantly differ from that of PCgrad (Fig. 3a). This result is in line with the recent findings in [B], which show that MTL methods dealing with conflicting gradients do not actually reduce the *occurrence* of conflicting gradients. We will include this additional information for clarity and completeness in the final manuscript. --- ### **Q2: Ablation studies on the number of clusters.** We apologize for not making explicit references in the main paper to the ablation experiment results in the Appendix. We would like to point the reviewer to Section D.2 of the Appendix, where we perform ablation on the number $k$ of clusters. As shown in the result, our method is robust to the number of clusters and improves upon vanilla diffusion training in all cases considered. Specifically, we see that applying MTL methods with only two clusters ($k=2$) already leads to a significant performance boost, which confirms the effectiveness of our method. For clarity and completeness, we will provide a concise summary of the ablation experiments and make an explicit reference to the results in the main paper. --- ### **References** [A] Peebles et al., “Scalable Diffusion Models with Transformers”, arXiv 2022. [B] Shi, Guangyuan, et al., “Recon: Reducing Conflicting Gradients From the Root For Multi-Task Learning.”, ICLR 2023. --- Rebuttal Comment 1.1: Comment: thank authors for the rebuttal. My concerns are all addressed and I keep my rating. --- Reply to Comment 1.1.1: Title: Thanks to reviewer 7mPc Comment: Thank you for taking the time to respond to the rebuttal. We are glad that your concerns have been resolved.
null
null
Rebuttal 1: Rebuttal: # Global Response Dear reviewers, We sincerely thank you for dedicating time and effort to review our manuscript. In an attached PDF file, we have provided the results of all conducted experiments during the author response period for addressing concerns and questions raised by reviewers. In this response, we offer a concise explanation and setup of experiments in the attached PDF file. --- ## Contents - **[Figure A and B]**: To address concerns raised by reviewers 7mPc & sPtA, we conducted experiments on the large-scale dataset, ImageNet 256x256, with transformer-based diffusion models, DiT-S [A]. All training was performed for 800K iterations with AdamW optimizer with a learning rate of 1e-4 or 2e-5, and better results were reported. Figure A illustrates FID, IS, Precision, and Recall scores according to the classifier-free guidance scale from 1.5 to 3.0. Figure B depicts the trajectory of the FID score over training iterations with the classifier-free guidance scale 2.0. All samples are generated by DDPM 250-step sampler. - **[Figure C]**: To answer question 2 of reviewer 7mPc, we plot the average number of gradient conflicts in baseline training (referred to as ‘vanilla’ in the manuscript). The settings are the same as in Fig. 3a of the manuscript. - **[Table A]**: To address concerns raised by reviewer 8Ptq, a comparison of GPU memory usage and runtime for all methods is presented in Table A. These measurements were conducted using the setting where the number of clusters $k$ was set to 5 on the FFHQ dataset with LDM and timestep-based clustering. - **[Table B]**: For reflecting reviewer 8Ptq’s suggestion, the results of Linear Scalarization (LS) [B] and Random Loss Weighting (RLW) [C] on the FFHQ dataset with ADM architecture are provided in Table B. All settings are the same as in Table 1 of the manuscript. - **[Table C]:** To respond to the question of reviewer 8Ptq, we applied the weighted loss with the mean weight of each interval of UW and NashMTL in Fig. 3b and Fig. 3c and report the result in Table C. All experimental settings are the same in Figure 3 of the manuscript. --- ### References [A] Peebles et al., “Scalable Diffusion Models with Transformers”, arXiv 2022. [B] Xin et al., “Do Current Multi-Task Optimization Methods in Deep Learning Even Help?”, NeurIPS 2022 [C] Lin et al., “A closer look at loss weighting in multi-task learning”, arXiv 2022 Pdf: /pdf/514e374c74e98f4b74ea4a1685980e4e4317d15d.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Neural Frailty Machine: Beyond proportional hazard assumption in neural survival regressions
Accept (poster)
Summary: The paper proposes to parametrize frailty models with neural networks and provide results on learning rates for these models. Experiments demonstrate the efficacy of the proposed models. Strengths: 1. The theoretical analysis is interesting and cool. Weaknesses: 1. The paper states that: "[56] used a neural network to approximate the conditional survival function and could be thus viewed as another trivial extension of NHR." How is this not exactly the same idea but for frailty models, you're proposing a neural parametrization very much akin to [56]. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Everything is evaluated using IBS and IBLL, non-proper scores when censoring is not independent of event-times. Can you prove that the datasets you use indeed exhibit this independence. Can you provide evaluations using the likelihood, a proper score? 2. The objective seems computationally intensive to compute, can you provide computational comparisons with other methods? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing insightful comments. Below we address your specific points: ### Q1: On evaluation with proper score metrics Please kindly refer to the first part of the general response for a detailed explanation. In our opinion, while right-censored log-likelihood (CLL) is a proper metric, it is not well defined for many neural models with implicit uses of the nonparametric maximum likelihood (NPMLE) method like DeepSurv and Coxtime as well as models using pseudo likelihood like DeepEH. To provide more intuitions, think of the Breslow 's estimate for the cumulative hazard used in proportional-hazard type models (CoxPH, DeepSurv, RSF...), the estimate is a step function and therefore there are no corresponding estimates for the hazard function and hinders the evaluation of CLL. Breslow's estimate could be regarded a by-product of the NPMLE construction of partial likelihood. Now if we still want to evaluate CLL, the best one can do is to evaluate some approximated version as done in [1], with the approximation itself being not theoretically-principled. This is the reason why we did not incorporate CLL into our evaluation in the first place. For a more thorough evaluation, we provide additional experiments comparing NFM and SuMo-net, which we find is the only baseline method that allows straightforward computation of CLL metric, and NFM is shown to perform slightly better in this metric in comparison to SuMo-net. As shown in Table 1 in the second part of the general response. For completeness we paste it below: | | metabric | gbsg | flchain | support | mimic-iii | kkbox | |---------------|------------------|------------------|------------------|------------------|------------------|------------------| | SuMo-net | -0.256(0.052) | 0.367(0.056) | -1.184(0.023) | -0.673(0.023) | -0.052(0.003) | 0.128(0.004) | | NFM(PF) | -0.184(0.020) | 0.378(0.054) | -1.169(0.022) | -0.622(0.028) | **0.125(0.006)** | 0.647(0.003) | | NFM(PF) | **-0.148(0.027)**| 0.369(0.046) | -1.166(0.026) | **-0.588(0.036)**| -0.026(0.001) | **0.786(0.009)** | Table 1: Comparison of NFM with SuMo-net in the CLL metric ## Q2: On the computational complexity of using Clenshaw-Curtis quadrature Please kindly refer to the second part of the general response for a detailed explanation. We illustrate in Table 2 and Table 3 in the second part of the general response that using $10$ discretization steps in the quadrature method suffices for competitive performance, with the running time comparable to efficient methods like SuMo-net and being much faster than kernel approximation methods like DeepEH. For completeness we paste the results below: |dataset | |NFM(PF) | | |NFM(FN) | |SuMo-net|DeepEH | |--------|----------|----------|----------|----------|----------|----------|--------|--------| | |steps = 10|steps = 20|steps = 50|steps = 10|steps = 20|steps = 50| | | |metabric|9.7s | 12.5s |17.7s |11.4s | 19.4s | 36.1s | 13.58s | 85.6s | Table 2: Running time comparisons of using different number of discretization steps, along with two baselines | | | NFM(PF) | | | NFM(FN) | | |----------|-----------|-----------|-----------|-----------|-----------|-----------| | |cindex |ibs |inbll |cindex |ibs |inbll | |steps = 10|65.16(1.46)|16.28(0.76)|49.02(2.29)|66.82(1.62)|16.03(0.87)|47.96(2.53)| |steps = 20|65.16(1.46)|16.28(0.76)|49.02(2.29)|66.63(1.68)|16.07(0.84)|48.04(2.44)| |steps = 50|65.13(1.46)|16.28(0.76)|49.03(2.29)|66.79(1.49)|16.10(0.83)|48.10(2.43)| Table 3: Performance comparisions of using different number of discretization steps 1] Rindt, David, et al. "Survival regression with proper scoring rules and monotonic neural networks." International Conference on Artificial Intelligence and Statistics. PMLR, 2022. --- Rebuttal Comment 1.1: Title: Re: Rebuttal Comment: Thanks for the response, raising to 5.
Summary: The authors propose a neural architecture to estimate the survival function for observations of survival times and censored survival times. The authors describe a methodology to include heterogeneity among the population by including the frailty component. The authors then theoretically and empirically illustrate properties of their approach. The theoretical results are used to provide additional justification for why a neural network architecture is appropriate for this kind of problem. Empirical results are added to place into context the performance of the proposed methods with respect to existing methods. Strengths: The paper provides a thorough analysis on using neural networks for survival estimation beyond the analyses present in previous works. The authors also consider frailty which they naturally include within their modelling framework, removing some of the homogeneity assumptions of previous works. The empirical results suggest that the method performs well in the considered scenarios. I did not check the details of the proofs, but the theoretical component seems useful and original for predicting convergence rates of the estimator and applicable to other similar estimators. Finally, the writing is clear and easy to follow. Weaknesses: The main weakness involves the empirical evaluation of the paper. None of the empirical gains are major when considering the empirical experiments. This is expected with real world data since it's difficult to estimate the counterfactual (i.e. maybe the survival time could have been much longer/shorter). I would suggest to add an empirical result that compares the methods in idealized settings where the true survival distribution is known and that the metrics can be appropriately compared. Another potential weakness involves the theoretical guarantees. If I'm understanding correctly, the theory applies in the case where the networks are well-learned, which is still subject to the issues associated with neural network optimization. It may be good to emphasize this in the text. Technical Quality: 3 good Clarity: 3 good Questions for Authors: How does the empirical convergence compare with the theoretical? The authors mention using Clenshaw-Curtis as the integrator, how does the parameters of the integrator (e.g. step size) affect the performance of the method? Is there some numerical bottleneck related to this integration? In the theoretical results, the MLPs scale with the number of samples, is there a rigorous connection to the number of parameters in the network and the number of samples? Additionally, was this enforced in the experimental setup, I could not tell if this was the case in the hyper parameter section of the appendix? Does the method extend to representing the joint distribution of multiple survival times? E.g. something like $P(T_1 > t_1, \ldots, T_d > t_d \mid Z)$? Was a nonparametric model for the frailty component examined, i.e. could $f_\theta$ be represented by some generative model constrained to be positive? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors did mention limitations in the appendix. Alongside their notes, it may be good to get a review of where the authors think that their method is appropriate and where other methods would be more appropriate. Additionally, it would be nice to see conditions where the method fails to get a better idea of the properties of the method. This could possibly come from some of the theoretical assumptions being violated (e.g. it’s unclear which ones are really necessary in practice versus for proving the theorems). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing insightful comments. Below we address your specific points: ### Q1: Evaluation of idealized settings According to our understanding, if the true surival time is always observed (i.e., no censoring effects), then the problem boils down to an ordinary regression problem and there are many appropriate metrics and learning objectives that allow a formal study of generalization. In our synthetic experiments, the true survival is known and we provided a pictorial illustration in figure 3 at appendix D that assesses the recovery of the true surival function. We also computed the relative integrated mean squared error (RISE) corresponding to different sample sizes. The results are listed in the following table, showing that the goodness of fit becomes better with a larger sample size. | |N=1000|N=5000|N=10000| |-------|------|------|-------| |NFM(PF)|0.0473|0.0145|0.0137 | |NFM(FN)|0.0430|0.0184|0.0165 | Table 1. RISE for the synthetic experiments ### Q2: Theoretical guarantees and optimization issues This is a very good point. Throughout this paper, we rely on the empirical observation that state-of-the-art stochastic optimization algorithms like Adam produce satisfactory results for neural models, and set aside the convergence issue (in the language of optimization). A careful analysis of the optimization landscape of the NFM objective is beyond the scope of the paper, but is a valuable topic and worth future explorations. We will add a discussion in the camera-ready version of the paper. Besides, regarding your comments "..networks are well-learned..", according to our understanding, to place ourselves in a machine learning context the issue is somewhat related to some notion of *generalization* in survival analysis, which is to our knowledge an open problem. In some recently proposed frameworks like [6] it might offer opportunities to study formal generalization in survival analysis, which is a promising future direction. ### Q3: Comparison of empirical and theoretical convergence In this paper, we establish statistical guarantees using the Hellinger distance which is hard to empirically evaluate [1]. Therefore we instead compute a intuitive metric RISE and reported in Table 1. We can see from the results that the goodness of fit becomes better with a larger sample size. ### Q4: On the complexity of using Clenshaw-Curtis quadrature Please kindly refer to the second part of the general response for a detailed explanation. We illustrate in Table 2 and Table 3 in the second part of the general response that using $10$ discretization steps in the quadrature method suffices for competitive performance, with the running time comparable to efficient methods like SuMo-net and being much faster than kernel approximation methods like DeepEH. ### Q5: On the scale of the network versus sample size In our theoretical analysis, we construct sieve spaces as a set of MLPs with depth and number of parameters that grow with the sample size at certain speed. This is actually the definition of sieve method [2] that construct a series of parameter spaces that eventually becomes dense in the target function space (the Holder ball defined in (8) in the paper). The construction is mostly driven by approximation-theoretic type arguments [3] and serves as a guide instead of enforcement for empirical hyperparameter choice. As you have pointed out, there are multiple additional factors that may affect the empirical results including optimization issues and training strategy. In our experiments, the final models are selected using early stopping on validation datasets. ### Q6: On extension to multiple survival times This is another very interesting question. Actually the frailty model is even more appropriate for scenarios where multiple surivival times are present, and we may use the powerful tool of frailty to describe the correlation structure among the individual survival times. Such extension requires carefully modeling the dependence structure [5] and will be left to further studies. See also remark 3.1 in the paper. ### Q7: Is it possible for a nonparametric model of frailty? This is yet another interesting and intriguing question. In general, if we allow the frailty transform to be nonparametric, it becomes statistically hard to separate the effect of the frailty transform and its input argument, as they are both infinite-dimensional and extremely expressive (see definition (3) in the paper). Therefore to study such kind of extensions we may need some extra restrictions on the generating process [5]. That said, we did try this option during our empirical evaluations and find this to perform pretty well empirically (comparable but not significantly surpassing parametric NFM). However as they are very subtle in theory, we chose not to report this line of results. [1] Sreekumar, Sreejith, and Ziv Goldfeld. "Neural estimation of statistical divergences." The Journal of Machine Learning Research 23.1 (2022): 5460-5534. [2] Chen, Xiaohong. "Large sample sieve estimation of semi-nonparametric models." Handbook of econometrics 6 (2007): 5549-5632. [3] Yarotsky, Dmitry. "Error bounds for approximations with deep ReLU networks." Neural Networks 94 (2017): 103-114. [4] Parner, Erik. "Asymptotic theory for the correlated gamma-frailty model." The Annals of Statistics 26.1 (1998): 183-214. [5] Tang, Weijing, et al. "Survival analysis via ordinary differential equations." Journal of the American Statistical Association (2022): 1-16. [6] Han, Xintian, et al. "Inverse-weighted survival games." Advances in neural information processing systems 34 (2021): 2160-2172. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. After seeing the other reviews and the response, I would like to keep my score. However, it would be good to make a statement regarding how the proposed method is more than just a straightforward extension of existing methods (e.g. highlighting the major contributions that make the frailty model more challenging than a simple extension) as well as including additional numerical results demonstrating when the method should perform well. Of course, only so much is possible during the response period, so it would be good to augment the final version of the paper with some of these additional experiments.
Summary: The frailty model is one of the popular models in survival analysis, and it is an extension of the classical Cox proportional hazard model. This paper extends the frailty model by using neural networks, and this paper provides the theoretical analysis of the proposed models. Strengths: + This paper proposes two new models NFM-PF and NFM-PM by combining the frailty model and neural networks. + This paper provides the theoretical analysis of the proposed models to show its correctness. Weaknesses: I think that the (original) frailty model is interesting, simply because it uses slightly weaker assumptions than the classical Cox model. Even though their difference is small, the frailty model can be significantly better than the Cox model in practical prediction performance. This is because the Cox model is essentially a linear model and therefore a slight change (i.e., using frailty) can be a huge differentiator. However, the proposed extensions of frailty models by using neural networks are not so interesting. We already know many neural network models for survival analysis such as DeepSurv [42] and DeepEH [75], which are extensions of the Cox and AFT models. All of the neural network models for survival analysis are flexible enough, and the advantage of using the concept of frailty in neural network models should be minimal. The experiments are not good enough with respect to showing the practical advantage of using frailty in neural network models. Although the proposed models were compared with many existing models, the performance differences seem coming from the differences of the neural network architectures (e.g., the network structures, the number of layers, the number of neurons in each layer, and many other implementation details) rather than the frailty. Since the goal of the experiments is to show the advantage of using frailty in the neural network models, the authors should compare the two neural network models with and without frailty by using (almost) the same neural network architecture. Moreover, the authors should have used the right-censored log-likelihood, which is shown to be a strictly proper scoring rule in [56], as the evaluation metric instead of IBS and IBNLL in the experiments. The statement “Both IBS and INBLL are proper scoring rules” in Line 301 is not true, and it is shown that both IBS and INBLL are not proper in [56]. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I have no question. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: I couldn’t find any description on limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing insightful comments. Below we address your specific points: ## Q1: The motivation of frailty and its advantage Firstly, we would like to emphasize that random effect (called frailty), which serves as a principled tool to model *unobserved heterogeneity*, has played an important role in modern survival analysis which is beyond a simple relaxation of proportional hazard assumption. There has been significant and active research concerning the addition of random effect to survival models. The random effect can describe risk or frailty for distinct categories, such as individuals or famlies. There eixst literature on providing theoretical and practical motivation for frailty models by discussing the impact of heterogeneity on analyses. In the influential article [1], Aalen showed that with a sufficiently strong frailty the population relative risk can go from r to 1/r over time, even though the true relative risk stays at r. As you have pointed out, DeepSurv and DeepEH are flexible models compared to CoxPH. However, DeepSurv still encodes the belief of proportional hazard, and frailty models relax this assumption and is strictly more powerful than DeepSurv. Secondly, while the algorithmic extension of incorporating frailty is straightforward, **it is highly nontrivial to establish statistical guarantees**. In this paper the goal is to introduce a rigorous statistical model with provable guarantees, which was lacked in many previous works. As was pointed out in the paper, the current theoretical developments on neural survival models rely on the fact that the loss function is well-controlled by the L2 loss [2, 3], which is not directly applicable to our model due to the flexibility in choosing the frailty transform. **In this paper, we developed a framework that further extends the proof strategy of previous works to more general losses, which we think is a significant contribution to the theoretical developments of neural survival models**. Finally, regarding your comments " the authors should compare the two neural network models with and without frailty by using (almost) the same neural network architecture.", we have provided such comparison in Table 5 in appendix D. The result shows the benefits of frailty in terms of predictive performance. Next we provide a further study on the presence of frailty: While the precise notion of explaining frailty seems to go beyond the scope of our paper, we conducted heuristic experiments to investigate the **strength of frailty effects** in the real-world data used in our paper. The methodology is to use the bootstrap test [4]. In particular, we construct $R=10$ bootstrap samples of each dataset, and then computes the frailty parameter estimates of each bootstrap replicate and report the means and standard deviations of bootstrap estimates. The test allows standard asymptotics if the estimate itself is asymptotically normal which is to be established in our future study. We treat the method as being heuristic. The results are summarized in the following table: | |metabric |gbsg |flchain|support|mimic-iii|kkbox | |---------------|---------|-----|-------|-------|---------|---------| |bootstrap mean |0.650 |0.569|0.678 |1.945 |0.857 |1.391 | |bootstrap stdev|0.017 |0.017|0.070 |0.118 |0.023 |0.066 | |mean/stdev |**38.23**|33.47|9.69 |16.48 |**37.26**|21.08 | Table 1: bootstrap tests against the presence of frailty Here we use (bootstrap mean)/(bootstrap stdev) as a intuitive measure of the presence of frailty effect (the rationale is similar to the Z-score in standard hypothesis testing). According to the table, the effect of frailty is mostly evident in metabric and mimic-iii dataset, which is coherent with the gain of frailty reported in Table 5 in appendix D of the paper. While there are many off-the-shelf neural survival models that are flexible and performative, only very few of them come with formal statistical guarantees (We believe some of them actually are theoretically sound without proofs). And the existing proof strategy has its own limitations. In this paper we make contributions to the theoretical developments via adopting new proof strategies to establish statistical guarantees, and we believe the strategy could be used to derive results for other approaches. ## Q2: On evaluation with proper score metrics Please kindly refer to the first part of the general response for a detailed explanation. First we would like to clarify that the statement in the paper was "Both IBS and INBLL are proper scoring rules if the censoring times and survival times are independent". In our opinion, while right-censored log-likelihood (CLL) is a proper metric, it is not well defined for many neural models with implicitly uses the method of nonparametric maximum likelihood (NPMLE) like DeepSurv and Coxtime, and the best thing can do is to evaluate some approximated version, with the approximation itself being not theoretically-principled. For a more thorough evaluation, we provide additional experiments comparing NFM and SuMo-net, which we find is the only baseline method that allows straightforward computation of CLL metric, and NFM is shown to perform slightly better in this metric in comparison to SuMo-net. As shown in Table 1 in the first part of the global response. [1] Aalen, Odd O. "Heterogeneity in survival analysis." Statistics in medicine 7.11 (1988): 1121-1137. [2]. Zhong, Qixian, Jonas W. Mueller, and Jane-Ling Wang. "Deep extended hazard models for survival analysis." Advances in Neural Information Processing Systems 34 (2021): 15111-15124. [3]. Zhong, Qixian, Jonas Mueller, and Jane-Ling Wang. "Deep learning for the partially linear Cox model." The Annals of Statistics 50.3 (2022): 1348-1375. [4]. Davison, Anthony Christopher, and David Victor Hinkley. Bootstrap methods and their application. No. 1. Cambridge university press, 1997. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' comments. I acknowledge that I have read all of the four reviews and the authors' comments. Having read the comments, I keep my score (i.e., I'm on borderline and I'm inclined to rejection), because I'm not convinced why the proposed methods *significantly* outperform the prior methods in the experiments even though the technical difference between the proposed methods and the prior methods is so small (i.e., using frailty). I would suggest clarifying all the hyper parameters (both for the proposed methods and the prior methods) and other details (e.g., python version) used in your experiments so that researchers can reproduce your experimental results. Note: I'm *not* requesting additional comments/experiments in this rebuttal phase, but I would suggest adding the information in the camera-ready version (if this paper is accepted) or the future manuscript (if this paper is rejected). --- Reply to Comment 1.1.1: Title: Some further clarifications Comment: Thank you for the response. We will incorporate your suggestions in revised versions of the paper. Additionally, we provided experiment details in appendix C.3 of the paper regarding the tuning of hyper-parameters. Moreover, we would like to point out that **we did not make the conclusion that NFM is significantly better than previous SOTA, but mostly comparable and sometimes slightly surpassing** regarding the benchmark datasets which we used. So far as we have noticed, the empirical results on the chosen datasets rarely exhibit large performance gaps: Even compared to the vanilla CoxPH, the current SOTA methods (considering the best performing one) only shows statistically significant advantage (at level 0.05) on 4 out of 6 datasets. Finally, frailty is a flexible framework has many possible parameterizations which we elaborate in appendix A of the paper. Could you tell us a little bit more on why you insist that the difference between frailty formulations and previous approaches is "so small"?
Summary: The authors propose a framework for survival regression called neural frailty machine. They have shown that most of the existing methods are a special case of the proposed framework. Also, they have drawn statistical convergence guarantees for the proposed model. The experiments show marginal improvement compared to existing models in terms of some metrics. Strengths: The paper is well-written and has clear motivation. The overall approach is intuitive and makes sense (although I haven't checked the proofs). The main contributions are introducing frailty variable and the the statistical guarantees drawn for the proposed approaches. Weaknesses: The main weakness is lack of discussion/explanation around the results and the two proposed approaches (PF and FN). More on this in the next section. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What is the intuition behind introducing PF? It is mentioned (in third page, line 127) that "Proportional-style assumption over hazard functions has been shown to be a useful inductive bias in survival analysis". If this type of function approximation has been already shown deployed in survival analysis, then it wouldn't be very novel. - Checking results, sometimes FN performs better, while PF works better other times. Why? Are there any explanations for each dataset? - Is it possible to report numerical results for Figure 1, so we can compare PF and FN? - Addition of frailty variable (w) has marginal effect (<~1%) on most of the dataset (Table 5 in Appendix) while having huge effect on one the datasets. Any explanations for this discrepancy? Overall, I think more discussion around numerical results is needed. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Authors discussed some the limitations in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing insightful comments. Below we address your specific points ## Q1: The intuition of proportional frailty model Firstly, we would like to emphasize that random effect (called frailty), which serves as a principled tool to model *unobserved heterogeneity*, has played an important role in modern survival analysis. There has been significant and active research concerning the addition of random effect to survival models. The random effect can describe risk or frailty for distinct categories, such as individuals or famlies. There eixst literature on providing theoretical and practical motivation for frailty models by discussing the impact of heterogeneity on analyses. Below is a quote from [1]: > It is a basic observation of medical statistics that individuals are dissimilar ... Still, there is tendency to regard this variation as a nuisance, and not as something to be considered seriously in its own right. Statisticians are often accused of being more interested in averages, and there is some truth to this. In the influential article [1], Aalen showed that with frailty, the population relative risk can go from r to 1/r over time, even though the true relative risk stays at r. Frailty is shown to be very useful to adjust for heterogeneity in real data. Secondly, while the algorithmic extension of incorporating frailty is straightforward, **it is highly nontrivial to establish statistical guarantees**. In this paper the goal is to introduce a rigorous statistical model with provable guarantees, which was lacked in many previous works. As was pointed out in the paper, the current theoretical developments on neural survival models rely on the fact that the loss function is well-controlled by the L2 loss [2, 3], which is not directly applicable to our model due to the flexibility in choosing the frailty transform. **In this paper, we developed a framework that further extends the proof strategy of previous works to more general losses, which we think is a significant contribution to the theoretical developments of neural survival models**. ## Q2: The benefits of frailty and the implications from empirical evaluations Here we answer the second and fourth question simultaneously. From an approximation point of view, FN is a more general scheme than PF. However it has been shown in recent developments of deep learning that the universality of some model does not necessarily imply its empirical effectiveness, but instead inductive bias matters, which is sometimes hard to formally quantify. At this stage we would like to argue that the additional inductive bias provided by the PF scheme may sometimes show its usefulness on specific datasets. While the precise notion of explaining frailty seems to go beyond the scope of our paper, we conducted heuristic experiments to investigate the **strength of frailty effects** in the real-world data used in our paper. The methodology is to use the bootstrap test [4]. In particular, we construct $R=10$ bootstrap samples of each dataset, and then computes the frailty parameter estimates of each bootstrap replicate and report the means and standard deviations of bootstrap estimates. The test allows standard asymptotics if the estimate itself is asymptotically normal which is to be established in our future study. Therefore we treat the method as being heuristic. The results are summarized in the following table: | |metabric |gbsg |flchain|support|mimic-iii|kkbox | |---------------|---------|-----|-------|-------|---------|---------| |bootstrap mean |0.650 |0.569|0.678 |1.945 |0.857 |1.391 | |bootstrap stdev|0.017 |0.017|0.070 |0.118 |0.023 |0.066 | |mean/stdev |**38.23**|33.47|9.69 |16.48 |**37.26**|21.08 | Table 1: bootstrap tests against the presence of frailty Here we use (bootstrap mean)/(bootstrap stdev) as an intuitive measure of the presence of frailty effect (the rationale is similar to the Z-score in standard hypothesis testing). According to the table, the effect of frailty is mostly evident in metabric and mimic-iii dataset, which is coherent with the gain of frailty reported in Table 5 in appendix D of the paper. ## Q3: Numerical evaluations of the synthetic experiments Following [5], we report the relative integrated mean squared error (RISE) of the estimated survival function against the ground truth and list the results in the following table. The reults suggest that the goodness of fit becomes better with a larger sample size. Moreover, since the true model in the simulation is generated as an PF model, we found PF to perform slightly better than FN, which is reasonable since the inductive bias of PF is more correct in this setup. | |N=1000|N=5000|N=10000| |-------|------|------|-------| |NFM(PF)|0.0473|0.0145|0.0137 | |NFM(FN)|0.0430|0.0184|0.0165 | [1] Aalen, Odd O. "Heterogeneity in survival analysis." Statistics in medicine 7.11 (1988): 1121-1137. [2]. Zhong, Qixian, Jonas W. Mueller, and Jane-Ling Wang. "Deep extended hazard models for survival analysis." Advances in Neural Information Processing Systems 34 (2021): 15111-15124. [3]. Zhong, Qixian, Jonas Mueller, and Jane-Ling Wang. "Deep learning for the partially linear Cox model." The Annals of Statistics 50.3 (2022): 1348-1375. [4]. Davison, Anthony Christopher, and David Victor Hinkley. Bootstrap methods and their application. No. 1. Cambridge university press, 1997. [5]. Zhong, Qixian, Jonas W. Mueller, and Jane-Ling Wang. "Deep extended hazard models for survival analysis." Advances in Neural Information Processing Systems 34 (2021): 15111-15124. --- Rebuttal Comment 1.1: Comment: Thanks authors for the response. Regarding Q2 results, it doesn't seem that the proposed score shows the level frailty (maybe only presence of frailty?). Because, It is not aligned with the table 5 in appendix. MIMIC-III shows significantly higher improvement while having a comparable level of frailty estimate to metabric (or gbsg). --- Reply to Comment 1.1.1: Title: Some further clarifications Comment: Thanks for the response. Perhaps there are some misunderstandings and misinterpretations regarding the results in table 1 in the rebuttal thread. We would like to make a few clarifications: - Speaking in a statistical context, the meaning in the frailty parameter is usually related to the variance of the random effect. In the additional experiments related to those results in table 1, we are primarily interested in assessing the presence of frailty (as you have also pointed out in the response), which is mostly reflected through the surrogate Z-score (third row of table 1). According to table 1, the presence of frailty is mostly significant in metabric and mimic-iii, which is coherent with table 5 in the appendix. Here we would like to re-emphasize that this is merely empirical observations. - As far as we can tell, the mean value of the frailty parameter has no explicit connection with how NFM performs regarding the predictive metrics like Cindex, IBS and INBLL. It might be somewhat misleading of using the term **strength of frailty effects**, which we want to correct here.
Rebuttal 1: Rebuttal: We'd like to thank all the reviewers for providing insightful comments, we will integrate some of the suggestions into the camera-ready version. We have found that there are several common issues raised by different reviewers: - The motivation of frailty and its advantage. - IBS/IBNLL/Cindex are not proper scoring rules in general. - The complexity of using Clenshaw-Curtis quadrature to compute the learning objective. Due to limited space, the first issue is postponed to the response to specific reviewers. Below we provide detailed explanations for the rest issues: ## IBS/IBNLL/Cindex are not proper scoring rules in general. It was pointed out by several reviewers that IBS and IBNLL are improper scoring rules when the censoring time is not marginally independent with the event time. In [1] the authors advocate using censored log-likelihood (CLL) which is a proper metric. We explain below why CLL is not adopted in our paper, and provide empirical evaluations regarding this metric: - **CLL might be not well-defined for a lot of survival models**: Many survival models are semiparametric with a nuisance parameter being assumed to lie in an infinite-dimensional functional space. The construction of the renowned partitial likelihood for the CoxPH model is a representative case of nonparametric maximum likelihood (NPMLE) (see [4] for the construction). It is well known in statistical literature that **without proper restriction, nonparametric likelihood can be unbounded** [3, 5]. In the construction of partial likelihood, the estimate of the cumulative hazard function is restricted to be a step function and there are no proper estimate of hazard function under NPMLE, thereby hindering the evaluation of CLL. However, most current survival baselines more or less uses the rationale of partial likelihood, including RSF, DeepSurv, Coxtime and many others. Besides, it is also tricky to apply CLL to models using pseudo likelihood like DeepEH. In [1], the authors used a finite-difference approximation to compute an approximate version of CLL that was not well-defined for models like DeepSurv or Coxtime. Therefore, we think that despite its properness, the applicability of CLL is limited for a fair comparison. - **A comparison in CLL with SuMo-net**: Although CLL is not a generally applicable metric for survival models as argued above, it is applicable to the NFM framework. Therefore, we provide an additional evaluation of CLL with a comparison to SuMo-net [1], the reults are summarized in table 1 in the attachment pdf. The results suggest that NFM performs on par with SuMo-net, with statistically significantly better results (under $0.05$ significance level) on 4-out-of-6 datasets. ## The complexity of using Clenshaw-Curtis quadrature to compute the learning objective. The only hyperparameter involved in the computation of Clenshaw-Curtis quadrature (CCQ) is the number of discretization steps which are common to many numerical integration methods. Here we provide assessments on this hyperparameter on both computational efficiency (measured in wall clock running time) and the trade-off with model performance via comparing the model performance using different number of discretization steps. All experiments are made on the metabric dataset under the optimally tuned architecture, trained for $50$ epochs using a M1 Max MacBook Pro 2021 in its cpu version. We first report the efficiency evaluation in table 2 from the attachment pdf, where we assess three configurations with $\text{steps} \in \{10, 20, 50 \}$ and compare with SuMo-net as well as DeepEH (with a computationally identical configuration of neural function approximator). From the results we find that - PF is faster than FN and is less affected by increasing the number of integration steps. This stems from the fact that PF decouples the computation of $h$ and $m$, and we only need to evaluate the numerical integral of a one-dimensional function. For the FN scheme, integral of a multi-dimensional function is involved and is thus more time-consuming. - With a proper choice of steps (i.e., less than $20$), both NFM schemes are comparable in efficiency to SuMo-net, which is regarded as an efficient implementation that involves only a single feedforward neural network call. Besides, NFM are much more efficient then DeepEH, since the computation of DeepEH's objective involves kernel approxiamtions and is quadratic in sample size. Overall, **the computational cost of both NFM schemes are controllable for a moderate number of steps**. Next we study the effects on model performance. The experiments are conducted on the metabric dataset and summarized in table 3 from the attachment pdf. We conclude from the table that even using $10$ integration steps suffices for competitive performance, and using a larger number of integration steps seems to have little extra gain. Therefore, we conclude that the additional complexity of using CCQ for evaluation of learning objective is totally affordable, especially when GPUs are deviced for parallel evaluation, the computation time might be further reduced. [1] Rindt, David, et al. "Survival regression with proper scoring rules and monotonic neural networks." International Conference on Artificial Intelligence and Statistics. PMLR, 2022. [2] Bickel, Peter J., et al. Efficient and adaptive estimation for semiparametric models. Vol. 4. Baltimore: Johns Hopkins University Press, 1993. [3] Kiefer, Jack, and Jacob Wolfowitz. "Consistency of the maximum likelihood estimator in the presence of infinitely many incidental parameters." The Annals of Mathematical Statistics (1956): 887-906. [4] Murphy, Susan A., and Aad W. Van der Vaart. "On profile likelihood." Journal of the American Statistical Association 95.450 (2000): 449-465. [5] Zeng, Donglin, and D. Y. Lin. "Efficient estimation for the accelerated failure time model." Journal of the American Statistical Association 102.480 (2007): 1387-1396. Pdf: /pdf/91689ba84dcb0d812bc06fa87bedfef0117535b3.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
GLIME: General, Stable and Local LIME Explanation
Accept (spotlight)
Summary: This work is embedded in the research on model-agnostic explanations, i.e., to provide the user an understanding on the outputs of otherwise black-box prediction methods without knowing about the model's internals. While LIME is a popular approach to solve this problem, prior work has demonstrated LIME to suffer from a strong dependence on the random seed, leading to instability and inconsistency. Since reliable, model-agnostic explanations will be a crucial tool for research and application alike to afford the use of otherwise black-box machine learning models, this paper is tackling an important issue considering LIME's popularity yet evident short-comings. GLIME is presented as a step towards more general, stable and consistent model-explanations. Due to the free choice of its sampling distribution and weights, it is shown how GLIME not only improves on LIME but generalizes over other methods. Strengths: 1.The method GLIME is presented very clearly. Not only are text and equations supporting the reader in understanding the method well, but its motivation as successor of LIME in terms of stability and local fidelity are easy to follow and well justified by both the presented related work as well as this paper's own evaluation. 2. The unification of various model-explanation methods not only gives the reader an overview of how these methods relate to each other but shows well how GLIME is not only succeeding at stability and local fidelity but also a more general framework then LIME. Weaknesses: 1. This work is strongly focused on comparing GLIME and its variants to LIME. While the relation of LIME and GLIME are made clear and well supported by the experiments, a more comprehensive overview on the field of explanation methods other than LIME could help the reader to better understand how GLIME fits into current research. Similarly, a discussion of GLIME's short-comings and an outline of Future Work would reinforce the contribution. Along the same line, a discussion on GLIME's quality as model-explainer and human-interpretability of the achieved results would greatly support the claims. 2. While the figures present the concepts and results of this paper quite well, they could benefit from some additional attention and polishing. For example, Fig 1a misses an explanation of the employed colormap. Fig. 1b shows GLIME-Gauss as blue dot in the legend but not the graphic itself. In Fig. 4a, the legend occupies important parts of the plot such that GLIME-Binomial and GLIME-Gauss curves are hard to see. 3. The use of inline-math can at times be overwhelming, e.g., in Theorem 4.2. While it is important to state all the relevant equations and relations, reverting to display- rather than inline-math for the key concepts might help the reader to better digest the underlying theory and assumptions. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. What are GLIME's short-comings and what are plans to improve on the method in the future? 2. Has it been evaluated if there is a difference in how human-interpretable GLIME's and LIME's explanations are? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Overall I would say it's a technically sound paper, but for a model-explanation paper I am missing the human-side a bit. My understanding is that the improvements the work shows are only meaningful if the consistent/stable explanations are also still good explanations, which I think is not really discussed here. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer jFSL for reviewing our paper and for the insightful comments. We hope our answers will address your concerns. > **Q1:** What are GLIME's short-comings and what are plans to improve on the method in the future? **A1:** From our perspective, the following may be promising for future research: * Parameter selection: It would be worthwhile to explore algorithms that can adaptively determine the optimal locality parameter $\sigma$ for each input $\mathbf{x}$, allowing for more flexible and context-specific explanations. * Feature correlation: Considering feature correlation in the explanation process could enhance the accuracy and interpretability of the results. Developing algorithms that incorporate feature correlation would be a promising avenue for future research. > **Q2:** Has it been evaluated if there is a difference in how human-interpretable GLIME's and LIME's explanations are? My understanding is that the improvements the work shows are only meaningful if the consistent/stable explanations are also still good explanations, which I think is not really discussed here. **A2:** We acknowledge the absence of a comparison between LIME and GLIME in terms of human interpretability in our study. We understand the significance of evaluating the interpretability of these methods and will carefully consider how to conduct such a comparison in future research. LIME, being one of the most widely utilized explanation methods, relies on linear model approximation to explain model behavior. Previous studies on LIME (e.g., ALIME [1], DLIME [2], S-LIME [3]) have primarily focused on addressing its limitations, particularly instability issues, rather than conducting human evaluation experiments. Instead, alternative evaluation metrics have been emphasized. Based on our current findings, we maintain the belief that GLIME outperforms LIME. A prominent concern with LIME is its high sensitivity to the choice of random seed. Figure 1(a) clearly demonstrates that different random seeds can yield entirely disparate explanation results. In contrast, GLIME consistently produces outputs irrespective of the seed, as indicated by the Jaccard Index. Explanations that are heavily influenced by random seeds may undermine their reliability and significance. Consequently, we argue that GLIME provides more meaningful explanations in comparison to LIME. Nonetheless, we recognize the necessity of conducting experiments to compare these methods in terms of human interpretability. We have plans to conduct relevant experiments in the future to address this aspect. Due to time constraints, we are currently unable to present experimental results, but we assure you that they will be included in the final version of our study. > **Q3:** A more comprehensive overview on the field of explanation methods other than LIME could help the reader to better understand how GLIME fits into current research. **A3:** Our main focus is to explore the limitations of LIME and how our proposed method, GLIME, addresses those limitations. Therefore, in the main body of the text, we primarily discuss LIME and similar explanation methods. There are many commonly used methods (e.g., SHAP, Anchor, SmoothGrad) that differ significantly from LIME, which we did not mention. However, we will provide a more detailed introduction to some commonly used explanation methods in the appendix, so that readers can have a clearer understanding of the differences between GLIME and those methods, as well as the applicability of GLIME. > **Q4:** Fig 1a misses an explanation of the employed colormap. Fig. 1b shows GLIME-Gauss as blue dot in the legend but not the graphic itself. In Fig. 4a, the legend occupies important parts of the plot such that GLIME-Binomial and GLIME-Gauss curves are hard to see. The use of inline-math can at times be overwhelming, e.g., in Theorem 4.2. **A4:** Thank you for bringing these issues to our attention. We appreciate your feedback and will make the necessary changes to address them. We will also conduct a thorough review and revision of other sections in accordance with your suggestions. [1] Sharath et al., ALIME: Autoencoder Based Approach for Local Interpretability [2] Muhammad et al., DLIME: A Deterministic Local Interpretable Model-Agnostic Explanations Approach for Computer-Aided Diagnosis Systems [3] Zhengze et al., S-LIME: Stabilized-LIME for Model Explanation --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: I would like to thank the authors for their response. Although I still feel that this paper feels incomplete without the human angle, I will not fight for rejection due to this. Overall, other concerns have been satisfactorily addressed and thus I will raise my score to 5. --- Reply to Comment 1.1.1: Title: Human-interpretability experiment results Comment: Thank you for your feedback and suggestions. We have incorporated human-interpretability experiments as you recommended. Our experimental design is as follows: * Our experiment consists of two parts: 1. Selecting images where the model's predictions are correct and presenting the original images along with explanations from LIME and GLIME to the participants. The participants are asked to rate the degree of matching between the explanations provided by the algorithms and their intuitive understanding on a scale of 1-5, where 1 indicates a significant mismatch and 5 indicates a strong match. 2. Selecting images where the model's predictions are incorrect and presenting the original images along with explanations from LIME and GLIME to the participants. The participants are asked to rate the degree of help provided by the explanations in understanding the model's behavior and identifying the reasons for the incorrect predictions on a scale of 1-5, where 1 indicates no help at all and 5 indicates significant help. * Each part consists of ten randomly selected images. * The participants in the experiment are college students from diverse backgrounds with no prior knowledge of machine learning. There are ten participants for each part, and the participants for the two parts are different. Here are the results of the experiment: * When participants were shown images where the model's predictions were correct, along with explanations from LIME and GLIME, they gave an average score of 2.96 to LIME and an average score of 3.37 to GLIME. Overall, GLIME had an average score 0.41 higher than LIME. For seven out of the ten images, GLIME had a higher average score than LIME. We performed a t-test on the scores of LIME and GLIME, resulting in a t-value of -1.36 and a p-value of 0.29. * When participants were shown images where the model's predictions were incorrect, along with explanations from LIME and GLIME, they gave an average score of 2.33 to LIME and an average score of 3.42 to GLIME. GLIME had an average score 1.09 higher than LIME for all ten images. We performed a t-test on the scores of LIME and GLIME, resulting in a t-value of -8.75 and a p-value of $1.08\times 10^{-5}$. The results of the second part of the experiment indicate that GLIME is more effective in helping people understand the model's behavior and debug the model. Although the results of the first part of the experiment are not statistically significant, they still reflect the relative advantage of GLIME over LIME to some extent. We will conduct more experiments to obtain statistically significant results and include results in the final version. We hope that our current experimental results address your concerns.
Summary: This paper proposes a new explanation method, GLIME, which provides more general, stable and local LIME explanations over ML models. Specifically, the authors demonstrate that small sample weights cause the instability of LIME, which results in dominance of regularization slow convergence and worse local fidelity.To address those issues, the authors proposed GLIME framework, which takes a slightly different form of LIME. Through rigorous theoretical analysis and some experiments, the authors can demonstrate that GLIME addressed the above issues and outperformed LIME. Strengths: + The authors addressed a very important problem, i.e., the well-known instability issue of LIME, and proposed an effective solution to address it. + The authors conducted a rigorous theoretical analysis to support their claims, which is very convincing. + The overall presentation is very clear and easy to follow. Weaknesses: + The experiments are only conducted on one dataset, i.e., ImageNet dataset. It would be better if the authors could show more results on more benchmark datasets + Some properties that are studied in theory for GLIME are not empirically verified. For example, in Section 4.1, the authors showed that their method can converge faster than LIME. Although they provide clear proof for it, the authors did not demonstrate it in experiments. So it would be better if some empirical experiments can cover this. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: + How about the experimental results of using GLIME on other datasets? + How about the empirical comparison between GLIME and LIME in terms of the convergence speed? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer sX7G for reviewing our paper and for the insightful comments. We hope our answers will address your concerns. > **Q1:** The experiments are only conducted on one dataset, i.e., ImageNet dataset. It would be better if the authors could show more results on more benchmark datasets. **A1:** We have also conducted experiments on text data, utilizing the DistilBERT model. We select 100 data points from the IMDb dataset as inputs for explanation. In our experiments, we compare the performance of GLIME-Binomial and LIME, and the Jaccard Index results are presented in the table below. Our findings indicate that GLIME-Binomial exhibits significantly higher stability than LIME across various values of $\sigma$ and sample sizes. Particularly, when $\sigma$ is small, GLIME-Binomial demonstrates a substantial improvement in stability compared to LIME. GLIME-B: GLIME-Binomial; | # samples | $\sigma=0.25$ | | $\sigma=0.5$ | | $\sigma=1$ | | $\sigma=5$ | | | --------- | ------------- | ----- | ------------ | ----- | ---------- | ----- | ---------- | ----- | | | GLIME-B | LIME | GLIME-B | LIME | GLIME-B | LIME | GLIME-B | LIME | | 258 | 0.533 | 0.406 | 0.533 | 0.407 | 0.533 | 0.421 | 0.533 | 0.509 | | 512 | 0.607 | 0.414 | 0.607 | 0.417 | 0.607 | 0.441 | 0.607 | 0.567 | | 1024 | 0.679 | 0.431 | 0.683 | 0.437 | 0.683 | 0.480 | 0.683 | 0.637 | The $R^2$ values for GLIME and LIME are presented in the table below. The sample size is fixed at 1024. The results indicate that GLIME exhibits superior local fidelity compared to LIME, particularly when $\sigma$ is small. GLIME-B: GLIME-Binomial; | | $\sigma=0.25$ | | $\sigma=0.5$ | | $\sigma=1$ | | $\sigma=5$ | | | ----- | ------------- | ----- | ------------ | ----- | ---------- | ----- | ---------- | ----- | | | GLIME-B | LIME | GLIME-B | LIME | GLIME-B | LIME | GLIME-B | LIME | | $R^2$ | 0.688 | 0.001 | 0.691 | 0.160 | 0.693 | 0.579 | 0.693 | 0.682 | We will conduct more experiments in the future and incorporate the results into the final version. > **Q2:** Some properties that are studied in theory for GLIME are not empirically verified. For example, in Section 4.1, the authors showed that their method can converge faster than LIME. Although they provide clear proof for it, the authors did not demonstrate it in experiments. So it would be better if some empirical experiments can cover this. **A2:** The comparison is presented in Figure 4(a). In Figure 4(a), we present the stability of GLIME and LIME with respect to random seeds and parameters. GLIME-Binomial, which is equivalent to LIME, is much more stable than LIME, especially when $\sigma$ is small. Under default setting where $\sigma=0.25$, GLIME requires 256 sample to have top-20 Jaccard Index $\approx 0.9$ while with 2048 samples, LIME only has top-20 Jaccard Index$\approx 0.7$. This is an empirical evidence that GLIME converges much faster than LIME. Please refer to Section 5.1 and Figure 4(a) for more details. --- Rebuttal Comment 1.1: Title: Thanks for the authors' efforts Comment: I think the authors addressed my concerns. I also read other reviewers' comments and I think overall the responses look good to me. So I raised my score. But I would love to communicate with other reviewers if any other significant issues are raised. --- Reply to Comment 1.1.1: Comment: Thank you for your response and encouraging feedback. We are glad that we are able to help address your concerns.
Summary: In this paper, the authors present GLIME an approach for explainable ai that generalizes LIME. Here, the authors present a framework that encompasses different explainability methods as instantiations of different aspects such as loss function, sampling function, etc. The authors also present an analysis of problematic cases for LIME. More precisely, they show how the interaction of the weighting and regularization can cause instability in the explanations and how the samples generated in LIME might not be close to the original sampling space. The paper then presents different sampling procedures and show empirically how they converge and how stable the explanations are given different parameterizations. Strengths: I find the paper insightful, in particular the aspect of the weights becoming all zeros in the standard case for low values of sigma. The paper is easy to read, technically sound and guides the reader through the concepts in a solid yet understandable way. The technical contributions are solid and overall provides a good foundation for further research. Overall the paper is original and I would say significant as it has the potential to become the standard replacement for LIME. Weaknesses: The main concern I have is regarding the empirical section. In particular, you mention two main issues with LIME being the interaction of the weighting with regularization and sub-par sampling. However, it would seem like ALIME does not suffer from those two issues. It would be good to see a comparison of GLIME and ALIME in Fig. 4. There are some minor improvements I would suggest on the presentation. I would suggest you unify the color scheme in Fig. 4 and if possible present as many of the methods in both graphs. Is there a typo in the norm of the weighting function in line 171? shouldn't it be 2 and 2? The language on the sub-section in Feature attributions could be improved. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: My main concern is regarding the contribution and the comparison to ALIME. Could you mention a bit more in terms of your contributions regarding that method? This does not diminish your theoretical contributions in terms of the convergence in comparison to LIME, however it can potentially make GLIME less appealing to the general user than ALIME. Regarding your GLIME-gauss, and I believe ALIME is similar, wouldn't the sampling space be too close to the original image we want to explain? It seems like simply a noise addition similar to difussion models. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: I don't consider the paper to have potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer DXPw for reviewing our paper and for the insightful comments. We hope our answers will address your concerns. > **Q1:** You mention two main issues with LIME being the interaction of the weighting with regularization and sub-par sampling. However, it would seem like ALIME does not suffer from those two issues. It would be good to see a comparison of GLIME and ALIME in Fig. 4. **A1:** Although ALIME improves stability and local fidelity over LIME, GLIME still outperforms ALIME. One major difference between ALIME and LIME is that ALIME uses an encoder to encode samples into embedding space and compute their distance with the explained input in embedding space $\\\|\mathcal{AE}(\mathbf{z}) - \mathcal{AE}(\mathbf{x})\\\|_2$ while LIME uses a binary vector $\mathbf{z} \in \\\{0,1\\\}^d$ to represent a sample, and use $\\\|\mathbf{1} - \mathbf{z}\\\|_2$ as the distance between the sample and the explained input. ALIME use distance in embedding space to weight samples. Therefore, if the samples generated by ALIME is distant from $\mathbf{x}$, sample weights may still be very small and cause instability problem. We conduct experiments to compare GLIME and ALIME. We utilize the VGG16 model provided by the repository https://github.com/Horizon2333/imagenet-autoencoder as the encoder in ALIME for our experiments on ImageNet. The table below shows the results of the experiments. It can be observed that the improvement of ALIME is still not as significant as that of GLIME, especially when $\sigma$ is small or the sample size is small. We will include results of ALIME in Figure 4 in the final version. GLIME-B: GLIME-Binomial; GLIME-G: GLIME-Gauss | # samples | $\sigma=0.25$ | | | $\sigma=0.5$ | | | $\sigma=1$ | | | $\sigma=5$ | | | | --------- | ------------- | ------- | ----- | ------------ | ------- | ----- | ---------- | ------- | ----- | ---------- | ------- | ----- | | | GLIME-B | GLIME-G | ALIME | GLIME-B | GLIME-G | ALIME | GLIME-B | GLIME-G | ALIME | GLIME-B | GLIME-G | ALIME | | 128 | 0.952 | 0.872 | 0.618 | 0.596 | 0.875 | 0.525 | 0.533 | 0.883 | 0.519 | 0.493 | 0.865 | 0.489 | | 256 | 0.981 | 0.885 | 0.691 | 0.688 | 0.891 | 0.588 | 0.602 | 0.894 | 0.567 | 0.545 | 0.883 | 0.539 | | 512 | 0.993 | 0.898 | 0.750 | 0.739 | 0.904 | 0.641 | 0.676 | 0.908 | 0.615 | 0.605 | 0.898 | 0.589 | | 1024 | 0.998 | 0.911 | 0.803 | 0.772 | 0.912 | 0.688 | 0.725 | 0.915 | 0.660 | 0.661 | 0.910 | 0.640 | > **Q2:** Could you mention a bit more in terms of your contributions regarding that method? Regarding your GLIME-gauss, and I believe ALIME is similar, wouldn't the sampling space be too close to the original image we want to explain? It seems like simply a noise addition similar to diffusion models. **A2:** Both GLIME-Gauss and ALIME utilize Gaussian sampling and employ Ridge regression to solve linear explanations. However, there are notable differences between the two methods. * Firstly, ALIME initially trains an auto-encoder and then uses this auto-encoder to compute embeddings for each sample. On the other hand, GLIME does not require the training of an auto-encoder or any calculation of embeddings. Consequently, GLIME is significantly more efficient than ALIME. * Secondly, in ALIME, sample weights are computed based on the distance between the embeddings of the samples and the embedding of the original input, denoted as $\mathbf{x}$. These sample weights are then used in a weighted Ridge regression to obtain explanations. Conversely, GLIME does not involve the use of sample weights. In ALIME, sample weights emphasize locality, whereas GLIME enforces locality by employing a sample distribution that more frequently samples samples closer to $\mathbf{x}$. * Based on the comparison of the results between GLIME and ALIME mentioned above, although ALIME has improved stability compared to LIME, GLIME shows even better stability than ALIME, especially when the value of sigma is small and the sample size is small. Additionally, GLIME-Gauss introduces Gaussian noise to super-pixels, while diffusion models add noise to pixels. The distance between samples and the original image can be controlled by adjusting the parameter $\sigma$. In Figure 3, we showcase the distances of samples from the original image, with larger $\sigma$ resulting in samples being farther from the desired $\mathbf{x}$. > **Q3:** I would suggest you unify the color scheme in Fig. 4 and if possible present as many of the methods in both graphs. **A3:** Thanks for your suggestions. In the final version, we will unify Figure 4(a) and Figure 4(b). However, it is important to clarify that these figures are intended to compare different aspects of LIME and GLIME. Figure 4(a) aims to demonstrate how GLIME improves stability compared to LIME, while also illustrating the impact of regularization and weighting on the stability of LIME. To provide a comprehensive comparison, we will include LIME with regularization parameter $\lambda$ set to 0 (LIME$+\lambda=0$), LIME with $\lambda = e^{-d/\sigma^2}$ (LIME$+\lambda = e^{-d/\sigma^2}$), and LIME with sample weight function $\pi$ set to 1 (LIME$+\pi=1$). On the other hand, Figure 4(b) focus on comparing the local fidelity of LIME and GLIME, specifically by including different sample distributions used in GLIME. > **Q4:** Is there a typo in the norm of the weighting function in line 171? shouldn't it be 2 and 2? The language on the sub-section in Feature attributions could be improved. **A4:** Thank you for providing the correction. The accurate expression should be $\pi(\mathbf{z^\prime}) = \exp(-\\\|\mathbf{1} - \mathbf{z}^\prime\\\|_2^2 /\sigma^2)$. We will make sure to revise the language in the final version for further improvement. --- Rebuttal Comment 1.1: Comment: I'm satisfied with the rebuttal provided by the authors and the discussion. Therefore I'm raising my score. --- Reply to Comment 1.1.1: Comment: Thank you for your response and encouraging feedback. We are glad that you are satisfied with our response.
Summary: The paper introduces GLIME as a solution to tackle the issues of instability and diminished local fidelity encountered in the original LIME method. To address the problem of instability, GLIME employs a novel sampling scheme that guaranteed to have a faster sampling rate. The diminished local fidelity problem is resolved by modifying sampling distribution so that nearby samples have higher probability to be sampled. Disclaimer: I only read the main text and do not check correctness of the proposed sample complexity argument. Strengths: 1. The problem they tackle with is specific and well-formulated. The proposed solution is simple and effective. 2. Their methods are supported by sample complexity analysis. This analysis not only provides mathematical evidence that the original LIME approach necessitates an exponentially large number of samples to achieve convergence, but also demonstrates that their proposed method requires only a polynomial number of samples, offering a significant improvement in efficiency. Weaknesses: One weakness would be limited applicability of the proposed GLIME. The paper only demonstrates it can only be applied to the image domain. As other features from different domains, such as texts or categorical features, are not necessarily to be continuous, GLIME equipped with continuous distributions may not resolve the local fidelity issue. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: In Section 4.3 of the paper, it is stated that the local fidelity problem arises due to the utilization of a high regularization weight. Could this issue be addressed by reducing the regularization weight or, in more extreme cases, completely eliminating the regularization. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The potential societal impact is not stated in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer NTaD for reviewing our paper and for the insightful comments. We hope our answers will address your concerns. > **Q1:** One weakness would be limited applicability of the proposed GLIME. The paper only demonstrates it can only be applied to the image domain. As other features from different domains, such as texts or categorical features, are not necessarily to be continuous, GLIME equipped with continuous distributions may not resolve the local fidelity issue. **A1:** We have conducted experiments on text data. We use the DistilBERT model and select 100 data points from the IMDb dataset as inputs to be explained. We run experiments comparing GLIME-Binomial and LIME, and the results are shown in the table below. The sample size is set to 1024. From the results, it can be seen that GLIME has better local fidelity compared to LIME, especially when $\sigma$ is small. GLIME-B: GLIME-Binomial; | | $\sigma=0.25$ | | $\sigma=0.5$ | | $\sigma=1$ | | $\sigma=5$ | | | ----- | ------------- | ----- | ------------ | ----- | ---------- | ----- | ---------- | ----- | | | GLIME-B | LIME | GLIME-B | LIME | GLIME-B | LIME | GLIME-B | LIME | | $R^2$ | 0.688 | 0.001 | 0.691 | 0.160 | 0.693 | 0.579 | 0.693 | 0.682 | For data with discontinuous feature values, the samples $\mathbf{z^\prime}$s naturally tend to be far away from the input being explained. However, because of the large distances, the weights of each sample after weighting are very low. This may cause the sum-of-squares term to be dominated by the regularization term, leading to explanations tending towards zero and resulting in a smaller $R^2$ and poorer local fidelity. GLIME does not suffer from this problem, as the sum-of-squares term is not dominated by the regularization term, leading to better local fidelity. We will conduct additional experiments in the future to compare GLIME and LIME. > **Q2:** In Section 4.3 of the paper, it is stated that the local fidelity problem arises due to the utilization of a high regularization weight. Could this issue be addressed by reducing the regularization weight or, in more extreme cases, completely eliminating the regularization. **A2:** In our paper, we propose that the instability issue in LIME is attributable to the use of a high regularization weight and low sample weights. To address this concern, we conducted experiments aimed at evaluating whether reducing the regularization parameter $\lambda$ or increasing the sample weight function $\pi(\cdot)$ could improve stability. For a comprehensive analysis of the results and discussion, please refer to Figure 4(a) and lines 278-289. It is important to note that while removing regularization does offer some improvement, it is not as significant as the improvement achieved by GLIME. This is because, in the absence of regularization, the LIME solution involves computing $(Z^\top WZ)^{-1}$, where $W$ is a diagonal matrix with sample weights on its diagonal. The dependence on $W$ makes LIME sensitive to sample weights. Furthermore, since the sample weights are nearly zero, $Z^\top WZ$ becomes low-rank, resulting in numerical instability. Additionally, increasing the sample weights to 1 only provides limited improvement in stability for LIME. This is because the large sampling space of LIME causes the obtained explanations to highly depend on the selected samples. GLIME, on the other hand, does not encounter these issues as its solution does not require inverting a low-rank matrix, and it operates within a local sampling space. > **Q3:** The potential societal impact is not stated in the main paper. **A3:** The positive impact of GLIME is its ability to enhance the interpretability of machine learning, making the use of machine learning in various fields safer and more trustworthy. It reduces the potential for social security incidents caused by machine learning. It can help users better understand the behavior of machine learning models. The potential negative societal impacts of GLIME are not yet clear at the moment. --- Rebuttal Comment 1.1: Title: Thanks for the author's reply Comment: I believe the authors have taken into account the points I raised, so I raised my score to 7. Additionally, I've gone through feedback from other reviewers, and on the whole, I find the authors' responses satisfactory. --- Reply to Comment 1.1.1: Comment: Thanks for your reply and encouraging feedback. We are glad to see that you are satisfied with our responses.
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
An Efficient Tester-Learner for Halfspaces
Reject
Summary: The paper introduces an efficient algorithm for learning halfspaces in the testable learning model in which the tester-learner first applies a test on the training data and if the test succeeds the algorithm produces a hypothesis which is guaranteed to be near-optimal. It is required that if the data comes from a target distribution, then the test must succeed with high probability. The paper considers learning halfspaces in the case where the target distribution is Gaussian (or strongly log-concave) and where the labels are subjected to Massart noise or adversarial noise (i.e., agnostic setting). The paper builds on several ideas from previous papers by Diakonikolas et al. Strengths: Learning halfspaces is a fundamental problem in machine learning. Even though it is one of the simplest tasks, the problem becomes non-trivial in the presence of label noise. Recently, there has been a lot of interest in developing algorithms for learning halfspaces in several settings (e.g., Massart noise, ...). The submitted paper is the first work that presents an efficient algorithm for learning halfspaces in the testable learning model of Rubinfeld and Vasilyan. I find the paper to be clear and generally well-written, and I find the results novel and interesting. Weaknesses: Minor comments/typos: - Page 2 line 71: "an important one being that the probability mass of any region close to the origin is proportional to its geometric measure" -> "an important one being that the probability mass of any region close to the origin is roughly proportional to its geometric measure". - Page 5, line 198: Shouldn't the title of the section be "Testing properties of isotropic strongly log-concave distributions"? - Page 5, line 214: "and runs and in time poly(...)" -> "and runs in time poly(...)" - Page 5, line 235: There should be a comma between \tau and \delta. - Page 7, line 307: "Each of the failure events will have probability at least $\delta'$ " -> "Each of the failure events will have probability at most $\delta'$ ". - Page 8, line 322: "under theempirical distribution" -> "under the empirical distribution". The following relevant references seem to be missing: - [1] Sitan Chen, Frederic Koehler, Ankur Moitra, Morris Yau, "Classification Under Misspecification: Halfspaces, Generalized Linear Models, and Connections to Evolvability", NeurIPS 2020. - [2] Rajai Nasser, Stefan Tiegel, "Optimal SQ Lower Bounds for Learning Halfspaces with Massart Noise", COLT 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Minor questions: - Page 5: In Algorithm 1, shouldn't T1 also run with $\delta'$ ? Optional suggestion: - Did the authors consider the Tsybakov noise model? It is harder than the Massart noise but easier than the agnostic setting, and hence it might be possible to get an efficient tester-learner that achieves the information-theoretically optimal error $\text{opt}+\epsilon$ for halfspaces with Tsybakov noise (of course, under structured marginal distributions such as Gaussian). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: No concerns regarding potential societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the anonymous reviewer for their constructive comments and suggestions, as well as for alerting us to a number of typos (including one within Algorithm 1) and relevant references. We also thank the reviewer for pointing to an interesting open direction regarding whether our techniques would also provide information-theoretically optimal results for the Tsybakov noise model. We have not closely considered this model so far, but agree that it would be interesting. --- Rebuttal Comment 1.1: Title: Reply to rebuttal by authors Comment: Thank you very much for your reply. My assessment of the paper remains positive.
Summary: This papers gives an efficient algorithm for learning halfspaces under the testable learning framework of Rubinfeld and Vasilyan (STOC'23) and facing either Massart or agnostic noise. In this setting, the algorithm is given some reference marginal distribution $D^*$ (which is assumed to be isotropic and strongly log-concave), and it may choose to "reject" (asserting that the actual marginal is different from $D^*$) instead of outputting a hypothesis. Naturally, the learner needs to satisfy the following two conditions: - Completeness: When the marginal is indeed $D^*$, the learner does not reject w.h.p. - Soundness: The probability that the learner outputs an insufficiently accurate hypothesis is low. Here, sufficient accuracy means achieving either $\mathsf{opt} + \epsilon$ (under Massart noise) or $\tilde O(\mathsf{opt}) + \epsilon$ error (in the agnostic setting), where $\mathsf{opt}$ is the loss of the optimal halfspace and $\epsilon > 0$ is a parameter. The solution is built upon the nonconvex optimization approach to learning halfspaces under noise in the literature. The key property for this approach to succeed is that when some appropriate loss function is minimized, all the stationary points are reasonably close to the true parameter. The crux of the current work is then to identify certain testable properties of the marginal under which the above argument goes through, so that we either get a good learning guarantee, or obtain a witness for that the marginal is not $D^*$. Strengths: The paper studies a fundamental learning theory problem (i.e., learning halfspaces) in the newly introduced testable learning setup. The results are strong and comprehensive, and the solution is nontrivial and requires several novel ideas. The paper is very nicely written and well-strutured, and the main paper contains sufficient details (including the "technical overview" section) for the reader to appreciate the high-level ideas behind the work. Despite the few weaknesses discussed below, I found the paper a strong submission that should be accepted. Weaknesses: - The hypothesis class is restricted to homogeneous halfspaces (without a bias). - In the agnostic case, the error bound can be higher than the optimal error by a constant factor. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Echoing the weaknesses part, what are the technical hurdles that prevent the current approach from handling non-homogeneous halfspaces and achieving the "$\mathsf{opt} + \epsilon$"-type error guarantee? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: This is a theory paper and its limitations are in the assumptions made by the problem setting as well as the main results, e.g., the restriction to halfspaces, the noise model, and that a single reference marginal distribution is provided. These are clearly stated in the paper as well as the separate "Limitations and Future Work" section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We wish to thank the anonymous reviewer for their constructive feedback and for appreciating our results! The problem of designing efficient tester-learners for non-homogeneous halfspaces is an interesting open question. Our approach does not immediately apply to this problem, because we crucially make use of the geometric properties of strongly log-concave distributions around the origin and for that we require the bias to be zero. However, prior work in the distribution specific setting [1], has proposed efficient learning algorithms and it is conceivable that their ideas could be relevant to the testable learning framework. In the agnostic setting, achieving $\mathrm{opt}+\epsilon$ in fully polynomial time has been shown to be impossible in the statistical query framework (see [2], [3]) or under cryptographic assumptions (see [4], [5]) even in the distribution specific setting (i.e., assuming the marginal to be the standard Gaussian). Therefore, prior work on the distribution specific setting has focused on constant factor approximation algorithms (as well as approximation schemes). The specific technical reason for which our techniques fail to achieve the optimal guarantee is described in lines 166-171 of our submission and amounts to the amplification of the estimation error of the optimal weight vector (e.g., according to Proposition 4.4). In the agnostic setting, the estimation error cannot be chosen to be arbitrarily small, but is instead proportional to $\mathrm{opt}$ (see Lemma 5.2). [1] Diakonikolas, I., Kontonis, V., Tzamos, C. & Zarifis, N.. (2022). Learning General Halfspaces with Adversarial Label Noise via Online Gradient Descent. ICML 2022. [2] Diakonikolas, I., Kane, D.M., & Zarifis, N. (2020). Near-Optimal SQ Lower Bounds for Agnostically Learning Halfspaces and ReLUs under Gaussian Marginals. NeurIPS 2020. [3] Goel, S., Gollakota, A., & Klivans, A.R. (2020). Statistical-Query Lower Bounds via Functional Gradients. NeurIPS 2020. [4] Diakonikolas, I., Kane D.M., & Ren, L, (2023). Near-Optimal Cryptographic Hardness of Agnostically Learning Halfspaces and ReLU Regression under Gaussian Marginals. ICML 2023. [5] Tiegel, S. (2023). Hardness of Agnostically Learning Halfspaces from Worst-Case Lattice Problems. COLT 2023. --- Rebuttal Comment 1.1: Title: Thank you for the reply! Comment: I want to thank the authors for the detailed answers. My evaluation of the paper remains positive.
Summary: Learning halfspace is a very important problem in machine learning which has been studied extensively. However, generally some distributional assumptions like gaussianity are assumed which in general is difficult to verify. To address this issue, recently Rubinfeld and Vasilyan (STOC 23) have introduced Testable learning framework. Here the primary objective is that if the tester accepts, then the output of the learner is close to OPT + \epsilon (OPT being the optimal error), and when it satisfies the distributional assumptions, the algorithm accepts with high probability. However, when the Gaussian distributional assumption is taken (let's denote this as D^*), it takes $d^{1/\epsilon^2}$ samples, which is also tight. Thus often researchers are interested to design algorithms that have better complexity with respect to $1/\epsilon$, but the error becomes $f(OPT) + \epsilon$ for some function f. In this work, the authors first design a tester when $D^*$ is isotropic log-concave distribution and the labels are corrupted according to Massart noise (the labels are changed by an adversary with a small probability $\eta$). Their algorithm runs in polynomial time and has error $OPT + \epsilon$ (Theorem 4.1). Later they design testers for adversarial noise with respect to Gaussian distribution with error $O(OPT) + \epsilon$ (Theorem 1.2). In Section 4, the authors study the case with Massart noise. The primary idea here is to minimize a non-convex smooth surrogate loss (4.1) such that its stationary points correspond to halfspaces with small error. The authors first run PSGD on this surrogate loss function to get a set of vectors L such that one vector in L is close to the optimal weight vector. Then they apply localization ideas based upon the region that is an axis-aligned rectangle T, and check if the low degree moments of input distribution D conditioned on T match with D^* conditioned on T. This will ensure that the angular distance of a stationary point w is close to the optimal w^* (Lemma 4.3). To convert closeness in angular distance to closeness in 0-1 loss, they use the fact that the distribution is isotropic strongly log-concave (Proposition 4.4). Later in Section 5, the authors study the agnostic setting where they will call the algorithm from Section 4 several times, each time with different parameters. The idea is that in the agnostic setting, running the algorithm for only once, the algorithm might only consider points that lie within a region with small probability. This finally gives a tester with error $O(OPT) + \epsilon$ when $D^*$ is isotropic log-concave distribution as well as Gaussian (Theorem 5.1 and Theorem 5.3). Strengths: The paper gives the first algorithm for testable learning of halfspaces that runs in poly(d, epsilon). The algorithm is very nice. With the complexity pulled down drastically, a proper implementation and experimental results for this algorithm would be possible and it would be nice to see the relevance of the concept of testable learning in various applications. Also the algorithm can handle both adversarial and massard noise. Weaknesses: The usefulness of the testable learning model in real life applications is yet to be understood. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can this approach be used to design tester-learners for function classes other than halfspaces. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: It is a pure theoretical work in the paradigm of testable learning - a relatively new concept whose importance is not yet fully confirmed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the anonymous reviewer for their feedback and comments. While our results are indeed of theoretical nature, we view the testable learning framework as an important step towards bridging theory with practice, since it removes a significant part of the modeling assumptions typically required to achieve provable guarantees for agnostic learning algorithms (i.e., we obtain provable guarantees that do not require any assumptions about the marginal distribution!). Further, the framework provides a new and theoretically well-founded way for a learning algorithm to say “I do / do not know,” which is a a problem of great relevance for reliable machine learning in general. We view the specific problem considered in this paper, namely attaining $O(\mathrm{opt})$ tester-learners for halfspaces, as one important step towards building the foundations of this framework. Designing tester-learners for other function classes (e.g., neurons with different activations) is an interesting open problem. In particular, for the problems of ReLU and of sigmoid regression, there are efficient algorithms that work in the distribution specific setting, but whether they can be extended to the testable learning framework remains open. --- Rebuttal Comment 1.1: Comment: I have read the reply. Thank you for the detailed response.
Summary: This work provides an efficient algorithm for testably learning halfspaces, extending the frontier of the recently introduced testable learning which does not assume anything about the given data distribution. Specificially, the setting is as follows: the target distribution is standard Gaussian (or any fixed strongly log-concave distribution) and the label noise is Massart or adversarial (agnostic). The main result is two-fold. 1. For Massart noise, if the target distribution is strongly log-concave, the paper proves an algorithmic guarantee that testably learns halfspaces up to $opt + \epsilon$ error and runs in $poly(d,1/\epsilon,1/(1-2\eta), \log(1/\delta)$ time. 2. For agnostic learning, if the target distribution is strongly log-concave, the guarantee is that the algorithm testably learns halfspaces up to $O(k^{1/2} opt^{k/(k+1)} + \epsilon$ error and runs in "roughly" $poly(d^k,1/\epsilon^k, \log^k(1/\delta)$ time (ignoring some logarithmic factors). One can strengthen this result if the target distribution is standard Gaussian. Then the algorithm testably learns halfspaces up to $O(opt) + \epsilon$ error and runs in $poly(d,1/\epsilon, \log(1/\delta)$ time, a result that matches previous non-testable learning results for halfspaces. The methodology borrows two algorithmic ideas and strengthens them. One is the algorithmic idea that runs (nonconvex) SGD on a convex surrogate (ramp function) for the 0-1 loss. Originally, given some distributional assumption, this approach would yield a hypothesis found from an approximate stationary point. In this paper, the authors check if such property is satisfied for the unknown distribution of the testable learning setting, leading to develop a three-stage testing procedure for strongly log-concave distributions. Here, the second algorithmic idea of moment matching kicks in. For the band $T$ s.t. $|\langle w,x \rangle| \le \sigma$, tests are ran on the empirical distribution conditioned on $T$ to check moments matching those of the target distribution on $T$. Strengths: - This work extends upon the recently introduced testable learning in one of the fundamental problems of learning, i.e., learning halfspaces. I believe the topic is of good importance as testable learning yields more practicality to learning algorithms. In that sense, the work studies and provides strong algorithmic result to an important problem. - The techniques used in the paper utilize two previous algorithmic ideas (convex surrogate SGD + moment-matching to fool) and neatly tie these two ideas together to a testable algorithm. The algorithmic techniques are also practical. - The tests are ran on the distribution conditioned on the band $T$. With this "trick", the paper manages to change weak additive guarantees to strong multiplicative ones. - The results are technically strong and presentation is clear. - The agnostic learning algorithm only modifies the Massrt one slightly, which is neat, though this may be more of contribution of previous work than that of this work. Weaknesses: No notable weaknesses, but refer to Questions for a potential undecided one. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - For agnostic testable learning for strongly log-concave distributions, the guarantees are weaker. Is there any inherent difficulty (e.g., lower bounds or conjectured hardnesses) that may explain the weaker guarantees? What is the intuitive reason why Massart and agnostic differ in results? - (Typo) L201 "stated here as Proposition A.3": Should this be 3.1 instead of A.3? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: No limtation addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We wish to thank the anonymous reviewer for their feedback and for appreciating our work! Regarding the reviewer’s question, prior work (e.g., [1], [2], [3], [4]) has provided evidence (in terms of statistical-query or cryptographic lower bounds) that achieving $\mathrm{opt}+\epsilon$ for the problem of agnostically learning halfspaces, even assuming Gaussian marginals, is hard (i.e., requires exponential dependence on $1/\epsilon$). As a result, recent prior work on the distribution specific setting has focused on providing efficient constant factor approximation algorithms (and polynomial time approximation schemes). From the perspective of the techniques we use, the reason that we get qualitatively different results for the agnostic case than in the Massart noise case is as follows (see also lines 166-171): 1. By finding a stationary point of the surrogate loss, we estimate the optimal weight vector up to some error. 2. The error of our weight vector estimate is amplified (according to Proposition 4.4, where $\theta$ is the error of our weight vector estimate). 3. In the Massart noise case, we can make $\theta$ arbitrarily small by using a polynomial number of resources (see Lemma 4.3), while in the agnostic noise case, $\theta$ is proportional to $\mathrm{opt}$ (see Lemma 5.2). 4. Therefore, in the final error, the amplified estimation error can be made arbitrarily small in the Massart noise case, but contains a function of $\mathrm{opt}$ in the agnostic noise case. We also note that, in the agnostic case, we obtain $O(\mathrm{opt})$ only for Gaussian target marginals for technical reasons and we leave the extension of such a result to broader families of target marginals for future work (see also lines 88-93). [1] Diakonikolas, I., Kane, D.M., & Zarifis, N. (2020). Near-Optimal SQ Lower Bounds for Agnostically Learning Halfspaces and ReLUs under Gaussian Marginals. NeurIPS 2020. [2] Goel, S., Gollakota, A., & Klivans, A.R. (2020). Statistical-Query Lower Bounds via Functional Gradients. NeurIPS 2020. [3] Diakonikolas, I., Kane D.M., & Ren, L. (2023). Near-Optimal Cryptographic Hardness of Agnostically Learning Halfspaces and ReLU Regression under Gaussian Marginals. ICML 2023. [4] Tiegel, S. (2023). Hardness of Agnostically Learning Halfspaces from Worst-Case Lattice Problems. COLT 2023. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: I have read the reply. Thank you for the detailed response.
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Posthoc privacy guarantees for collaborative inference with modified Propose-Test-Release
Accept (poster)
Summary: The paper proposed to protect data privacy in collaborative inference settings. Specifically, the paper presents a formal way to capture the obfuscation of an image using adversarial representation learning, a well-studied technique. The paper does so by using a metric-based differential privacy notion. Since the metric is difficult to compute, the paper proposes a way to estimate Lipschitz constant using an estimation technique from DP. Strengths: + The studied problem is important + The paper is easy to follow + Formal privacy guarantees Weaknesses: - Threat model is unrealistic - Lack of evaluation against advanced attacks - More clarifications are needed - Experimental result is not convincing Technical Quality: 2 fair Clarity: 3 good Questions for Authors: My big concern is that the threat model requires knowing the classifier (by the obfuscator/defender) which makes it unable to protect all attacks and impracticable in many cases. For instance, the data owner has a stringent requirement about his data privacy, e.g., not letting the defender knows the learning task. In other words, when knowing the classifier, why would the data owner send his data for classification to the server in the first place? “most of these works analyze specific obfuscation techniques and lack formal privacy definitions” This claim seems inaccurate. For instance, Zhao et al. 2020a indeed showed formal privacy guaranteed information leakage. What are the key differences? How do you compare your theoretical results with theirs? Table 1 shows that ARL-C-N is even worse than Encoder in several cases. Why does this happen? How can you claim that ARL-C-N performs the best among all baselines? Two recent works [a,b] propose advanced attacks against ``representation learning” based defense. I would like to see the performance of the proposed ARL-C-N . [a] Song et al., Overlearning reveals sensitive attributes. In ICLR, 2020 [b] Balunovic et al., Bayesian framework for gradient leakage. ICLR’22 Balle et al. [c] also showed theoretical results on data reconstruction via DP technique. What are the key technical differences? [c] Balle et al., “Reconstructing training data with informed adversaries,” in IEEE SP, 2022. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. We respectfully disagree with several points mentioned by the reviewer and have tried our best to address those by quoting them inline. First, we would like to clarify “collaborative inference” as studied in several existing papers[1-5] including ours. In collaborative inference, an embedding from an intermediate layer is shared by a client to a server and several existing works have proposed (informally private) obfuscation techniques to prevent these embeddings from input reconstruction attacks. Our work presents a framework to give a formal privacy guarantee for these obfuscation techniques. **My big concern is that the threat model requires knowing the classifier (by the obfuscator/defender)** The threat model does not require the defender to know the classifier. In fact, threat models (including ours) do not impose any restriction on defenders, they only focus on the capability of the adversary. **why would the data owner send his data for classification to the server in the first place?** There isn’t any strict requirement for the data owner to know the classifier. It is the training of the obfuscator that requires access to the classifier. However, such training can be performed by the service provider or a third party. We would like to emphasize that our work only focuses on inference and not training. A typical use-case for collaborative inference is that a cloud-based service provider company has a model which they want their users to query with their private data. However, the user can not run it on their own device (either due to the model size or the proprietary nature of the model). Therefore, the cloud-based service provider gives the end user an obfuscator so that it can share its obfuscated data with the service provider. **How do you compare your theoretical results with Zhao et al.2020a?** Zhao et al. 2020a study attribute obfuscation. In contrast, we study reconstruction attacks over the input data. We highlighted this important distinction in line 73 by stating - > A majority of the CI techniques protect either a sensitive attribute or reconstruction of the input. We only consider sensitive input in this work. In contrast, quoting Zhao et al. on Page 3, Lines 3 and 4 - > To simplify the exposition, we mainly discuss the setting where $X \subseteq R^d$, $Y = A = \{0, 1\}$, but the underlying theory and methodology could easily be extended to the categorical case as well. Therefore, they study binary (and categorical) sensitive attributes. Hence, Attribute Inference Advantage (their privacy definition) can not be extended to reconstruction privacy in the input space of data. **Table 1 shows that ARL-C-N is even worse than Encoder in several cases. Why does this happen? How can you claim that ARL-C-N performs the best among all baselines?** ARL-C-N is generally better than other techniques (best performing for 8 out of 12 combinations of datasets and privacy budgets). However, we agree this statement should be made more precisely. The “Informal” column is just for reference and shouldn’t be considered formally private. Considering that, the encoder is only higher than ARL-C-N for $\epsilon=10$ (except $\epsilon=5$ in UTKFace). This is due to the high local sensitivity of the Encoder (bad for privacy-utility trade-off) that gets discounted when the epsilon value is high. **Two recent works [a,b] propose advanced attacks against ``representation learning” based defense. I would like to see the performance of the proposed ARL-C-N.** We would like to clarify that ARL-C-N is not our proposed technique and is simply yet another defense mechanism. We have evaluated three popular classes of collaborative inference techniques – ARL[2,3,4], C[5], and N[6]. ARL-C-N is an ablation where all three regularizations are active. We have ablated all 2^3 combinations in the experiments and ARL-C-N is one of them. Song et al. attack focuses on inferring sensitive attributes while our attack is about reconstructing the input data from its embedding during inference. However, both attacks use supervised learning to train an attacker, and hence the training algorithm for the attacker is the same as the one we employed in Table 2 of the supplementary. While Balunovic et al. attack representation learning techniques, their attacks rely on access to gradients because they focus on Federated Learning. In contrast, gradients are not involved at all in our context because we focus on Collaborative Inference. **Balle et al. [c] also showed theoretical results on data reconstruction via the DP technique.** Balle et al. are also out of scope from our line of work. Their focus is on reconstructing training data by accessing the parameters of a trained model. Quoting the first line of their abstract - > Given access to a machine learning model, can an adversary reconstruct the model’s training data? In contrast, we study the problem of reconstruction attacks over input data during the inference (not training) stage. References 1. Yang, Mengda, et al. "Measuring Data Reconstruction Defenses in Collaborative Inference Systems." Advances in Neural Information Processing Systems 35 (2022): 12855-12867. 2. Singh, Abhishek, et al. "Disco: Dynamic and invariant sensitive channel obfuscation for deep neural networks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. 3. Xiao, Taihong, et al. "Adversarial learning of privacy-preserving and task-oriented representations." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 07. 2020. 4. Osia, Seyed Ali, et al. "A hybrid deep learning architecture for privacy-preserving mobile analytics." IEEE Internet of Things Journal 7.5 (2020): 4505-4518. 5. Mireshghallah, Fatemehsadat, et al. "Not all features are equal: Discovering essential features for preserving prediction privacy." Proceedings of the Web Conference 2021. 2021. --- Rebuttal Comment 1.1: Title: Response to Authors' Rebuttal Comment: Thanks for the response. However, several of my main comments are not fully addressed. **Threat model: We would like to emphasize that our work only focuses on inference and not training** The goal of the considered data reconstruction defense should be that the encoding cannot be used to infer the private data, while the encoding itself should maintain high utility. The authors said they focus on inference and not training, but the defender needs to perform obfuscator training. I am confused what the authors are exactly trying to protect? **Balle et al. are also out of scope from our line of work…. In contrast, we study the problem of reconstruction attacks over input data during the inference (not training) stage.** Balle et al. 2022 focus on reconstructing training data from informal adversaries who know all the training data except one. The adversaries are also assumed to know the released model and thus know all the intermediate representations. I do not know why the authors cannot compare their theoretical results with Balle et al.’s? Note that Theorem 4.4 and Theorem 2 in Balle et al. also look similar. --- Reply to Comment 1.1.1: Title: Response to reviewer's comment Comment: We thank the reviewer for responding. We will keep the response short as we now understand the point of confusion. We are trying to protect the leakage of inference (test) samples that are only used during the prediction stage. Our work is agnostic to how the obfuscator gets trained because training data is not private. As a concrete example - consider a single image $x$, obfuscator computes $z=f(\theta,x)$ where $f(\theta,\cdot)$ refers to a neural network trained on dataset $D$. This $z$ is shared with an untrusted party. Our framework only protects the reconstruction of $x$ from $z$. We do not study the leakage of $D$ from $z$. Please let us know if it doesn't address the concerns and we can expand further on the comments.
Summary: A method is proposed to encode inputs to a server that performs inference, to limit the ability of the service provider to infer detailed information about the input. A formal privacy guarantee, inspired by differential privacy, quantifies the amount of leakage. This metric is related to the local Lipschitz constant, which relation is used in the test of a propose-test-release mechanism to ensure local privacy. Strengths: The concepts introduced are original as far as I know and the exposition is mostly clear. Weaknesses: The primary weakness in my opinion is in the practical utility of the proposed method. If I understand correctly, the method does not provide local differential privacy (indeed the authors argue this is impossible for a system designed to return useful classification or functions of the input) but merely prevents the inference of detailed information. Specifically, it attempts to obfuscate any information that is not necessary for the task. It seems to me much of the time this would not guarantee "privacy" in the sense that most people would want. Suppose for example that we are working with linguistic input. Maybe it is a question answering service, or machine translation. Users of a purportedly private system would expect that the service would not have access to (not be able to examine or log) the specific question being answered, or the text being translated. Clearly that is incompatible with providing sufficient utility-- something like holomorphic encryption or trusted execution environments would be needed for that. So I think the paper would be stronger if it were motivated with some concrete applications where a user would reasonably be satisfied with the specific type of privacy afforded by this approach. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What are some concrete applications where a user would reasonably be satisfied with the specific type of privacy afforded by this approach? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations are discussed. No potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. We agree that tasks which require granular information from data such as translation would not be possible with collaborative inference approaches. However, plenty of use cases exist where all the information is not necessarily required to produce an answer. These uses-cases typically produce a significantly smaller dimensional output for high dimensional input. Examples include - - Image, Graphs, or Text classification tasks - A significant amount of information can be thrown away while still being able to compute the answer. - Vector databases for language model - With the improved performance of language models, several use cases for vector databases have arisen such as document retrieval, product recommendation, etc. For example - Authors of [1] tried to apply privacy to classification tasks by using traditional local DP, and hence, they had to resort to extremely large epsilons (around 200) to achieve a reasonable utility. We believe such a technique could benefit from the privacy definition proposed in our work. **References** - 1. Timour Igamberdiev and Ivan Habernal. 2023. DP-BART for Privatized Text Rewriting under Local Differential Privacy. In Findings of the Association for Computational Linguistics: ACL 2023, pages 13914–13934, Toronto, Canada. Association for Computational Linguistics. --- Rebuttal Comment 1.1: Title: Acknowledgement of rebuttal Comment: Thanks for answering my question. I will maintain my rating.
Summary: This paper proposes a method for evaluating the privacy guarantees provided through adversarial representation learning. More precisely, the framework proposed is built on the Propose-Test-Release paradigm (PTR) as well as the d_x-privacy metric. The primary objective is to be able to characterize the privacy risks in terms of reconstruction against the adversarial representation at inference time. Strengths: The paper is well-written and the authors have a done job at explaining the limits of previous works on assessing the security of adversarial representation learning. However, the definition of the Lipschitz constant should also be part of the main body of the paper rather than being in the supplementary material. The move from differential privacy to d_x-privacy enables to make the estimation of the local sensitivity tractable, which circumvents some of the limits of the PTR framework. The proposed approach is tested on three diverse datasets and the results obtained demonstrate that it is comparable in performance to other methods from the state-of-the-art. Weaknesses: There are several possible way to implement a LDP mechanism. I suggest to the authors to mention which one they have implemented (for instance is it randomized response) and also to try a different one to see if the results they obtained are dependent or not of the particular mechanisms evaluated. Even if the focus of the current approach is to prevent reconstruction attack, I suggest to also perform additional experiments to measure the success of membership inference attacks. This would help to fully characterize the privacy-utility trade-off of the proposed method. A few typos : -« hamming distance » -> « Hamming distance » -« laplace mechanism » -> « Laplace mechanism » -The following sentence should be completed : « as we show in Sec ?? » Technical Quality: 3 good Clarity: 3 good Questions for Authors: -It would be helpful to understand the limits of differential privacy if the authors could discuss in a bit more details the impossibility results of instance encoding (reference 7). -Please see also the weaknesses section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Ok. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and address the concerns as follows - **LDP mechanisms and limits of differential privacy** - We note that our LDP experiment is only for the purpose of illustrating the gap in utility. The protection by LDP is so high that it is unreasonable to expect any utility (as we discuss below) in the context of Collaborative Inference (CI). Having said that, our LDP implementation is based on the Laplace mechanism with clipping. In order to bound the sensitivity of a sample we clip each dimension within a fixed range and add Laplace noise to the sample based on its sensitivity (clipping range and dimensions). A majority of the LDP mechanisms (such as randomized response, CountSketch, etc.) are designed for aggregation queries like count, mean, etc. However, in CI we only have a single sample over which post-processing (classifier) is applied and the answer is returned to the user. This also leads to the impossibility of applying DP directly in this context as DP would require the obfuscated embedding from any two samples to be indistinguishable (up to $e^\epsilon$) while the task of classification requires samples belonging to different classes to be distinguishable. A stronger version of this claim is given in instance encoding that says a dataset of such obfuscated embeddings can either have high utility or high privacy but not both. We address this impossibility by changing the metric space from hamming (as traditionally done in DP) to a general metric and using an l-norm ball around the input such that any two samples within that l-norm ball are indistinguishable. For example - Igamberdiev and Habernal apply LDP for Privatized Text Rewriting – a classification task, and hence, they had to use extremely large epsilons (around 200) to achieve a reasonable utility. We will make sure to include a more detailed discussion in the extra page of the camera-ready version. **Comparison with membership attacks** - We believe a membership attack is not possible because aggregate statistics are not shared. Specifically, a membership inference attack would require access to an aggregate statistic (such as mean or a ML model) and the attacker would have to guess whether a given sample was used to compute that statistic. In contrast, a low-dimension projection of a private input is shared. In that view, there isn’t any “model” to query for membership inference. **References** - Timour Igamberdiev and Ivan Habernal. 2023. DP-BART for Privatized Text Rewriting under Local Differential Privacy. In Findings of the Association for Computational Linguistics: ACL 2023, pages 13914–13934, Toronto, Canada. Association for Computational Linguistics. --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: With respect to LDP, while it is true that usually it is used to randomize profiles of individuals and then compute aggregate information it can still be view as an obfuscated version of the original profile and could be used also as as input to receive a personalized prediction. It would be great if the authors could look at existing LDP mechanisms to see if some of them could possibly acts as an obfuscation in a similar manner as collaborative inference. For membership inference attacks, as the obfuscator is learnt through a training process, I still think it makes sense to evaluate whether such an attack is possible by querying the obfuscator. In particular, it is possible that there is a difference of behaviour between member and non-member with respect to the obsfucation that makes it possible to conduct a membership inference attack. --- Reply to Comment 1.1.1: Title: Thanks for your comments Comment: We thank the reviewer for the response. - We agree with the reviewer that other LDP mechanisms can be also applied similarly to how we have baselined with the truncated Laplace mechanism. - Following the taxonomy discussed in Wang et al, we believe perturbations performed for frequency and mean estimation (typically by applying binary/unary encoding, sketching, or hashing of data) are not directly applicable. - However, projection-based mechanisms typically used for estimating k-way marginals of high-dimensional data can be applied. We will include these perturbation mechanisms in addition to the truncated Laplace mechanism for our baselines in Supplementary Table 1. - While the obfuscator is learned through a training process, that training data is not private. Nevertheless, an interesting analysis would be to observe if these obfuscator models trained for privatizing inference data can also demonstrate different behavior for member and non-member training data points. Since it is orthogonal to the focus of our work and limited time, we could not do such an analysis during the rebuttal phase but we will include it in the final manuscript. References 1. Wang, T., Zhang, X., Feng, J., & Yang, X. (2020). A comprehensive survey on local differential privacy toward data statistics and analysis. Sensors, 20(24), 7030.
Summary: This paper is concerned with the development of privacy-preserving collaborative inference. A collaborative inference framework allows users to create a privacy-preserving encoding of their data, which in turn can be used in place of real private inputs when interacting with machine learning-based services. The promise of privacy-preserving collaborative inference is that an adversary observing an encoding should not be able to recover the original input or a sensitive attribute of the original input. The authors propose a new framework to formally evaluate the privacy guarantees of such an encoding against reconstruction attacks that seek to recover the original input. Strengths: - The paper made several technical improvements to render the proposed solution useful in practice - The proposed solution has been evaluated extensively Weaknesses: - Lack of discussion on the following items: - applicability of the framework to tabular datasets - guarantees of the proposed approach if the adversary has white box access to the obfuscator - The paper focuses on collaborative inference as a solution for input privacy at inference time. The related work on secure inference could benefit from discussions on other approaches such as secure multi-party computation and homomorphic encryption. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please provide more discussion on the following elements - applicability to tabular data - the robustness of the framework to adversaries that have white-box access to the obfuscator - comparison of CI-based approaches to other secure inference techniques relying on secure multiparty computation and homomorphic encryption Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing the feedback. We address the three main discussion points brought up by the reviewer below - **Application to tabular data** - Our framework is agnostic to the data modality. From the point of view of privacy, the input needs to be a vector so that it can be privatized by an obfuscator. From a utility point of view, the obfuscation model should have low sensitivity. **The robustness of the framework to adversaries with white-box access to the obfuscator** - Our threat model assumes the adversary has white-box access to the obfuscator as discussed in Sec 3.1 and hence the privacy guarantees hold equally well. **Comparison between CI and secure inference techniques** - Thanks for pointing this out. We will update our Related Work section with a more detailed discussion and references to secure inference techniques. Secure inference[1,2,3,4] provides a high level of privacy (that is every bit in the input data is almost completely indistinguishable to an attacker from a randomly generated bit). However, as the computation performed by the service provider gets more sophisticated, it comes at the expense of higher computation and communication costs. In contrast, CI is computationally cheaper and does not incur any extra communication costs. However, CI offers less privacy as the input sample is only indistinguishable in a neighborhood around the input. CI is also more flexible because secure inference techniques need infrastructure implemented by the service provider, but obfuscation methods are user-controlled. Furthermore, shared embeddings in CI are in the plaintext space and hence amenable to additional analysis such as downstream training as performed in Split Learning and Vertical Federated Learning. **References** - 1. Wagh, Sameer, Divya Gupta, and Nishanth Chandran. "Securenn: Efficient and private neural network training." Cryptology ePrint Archive (2018). 2. Lou, Qian, et al. "Falcon: Fast spectral inference on encrypted data." Advances in Neural Information Processing Systems 33 (2020): 2364-2374. 3. Reagen, B., Choi, W.S., Ko, Y., Lee, V.T., Lee, H.H.S., Wei, G.Y. and Brooks, D., 2021, February. Cheetah: Optimizing and accelerating homomorphic encryption for private inference. In 2021 IEEE International Symposium on High-Performance Computer Architecture (HPCA) (pp. 26-39). IEEE. 4. Lee et al., 2022, June. Low-Complexity Deep Convolutional Neural Networks on Fully Homomorphic Encryption Using Multiplexed Parallel Convolutions. In International Conference on Machine Learning (pp. 12403-12422). PMLR. --- Rebuttal Comment 1.1: Comment: Thank you for your clarifications. I will keep my original score.
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
HotBEV: Hardware-oriented Transformer-based Multi-View 3D Detector for BEV Perception
Accept (poster)
Summary: This submission introduces a carefully-crafted transformer model family (models with varying speed-accuracy trade-off) for 3D detection from multi-view camera data. The adopted model design methodology prioritises hardware-efficiency, customising the proposed architecture to the target GPU. For this purpose, an analytical performance model is developed and queried to estimate the inference latency and guide different design choices. Additionally, following a module-level benchmarking, targeted architectural changes are introduced to alleviate the main identified computational bottlenecks in the backbone, normalisation and fusion layer design. The proposed model is trained in the form of a super-network, from which targeted sub-models can be extracted, offering a speed-accuracy trade-off that dominates the pareto frontier between the examined baselines. Strengths: - The aim of the paper to design efficient 3D detection models that are able to achieve realistic latency gains, rather than theoretical workload reduction, is very important and well-motivated. - Additionally, offering a design methodology that is able to customise the model design to the target computation platform facilitates adaptability and is of great benefit to the community. - The results of the conducted benchmarking are insightful and offer detailed information for improving inference efficiency on similar tasks. - The proposed approach dominates the speed-accuracy optimality frontier on 3D detection from BEV data, compared to widely adopted baselines. Weaknesses: - The design choice to focus on multi-frame models instead of those adopting 3D representations is not sufficiently justified, nor backed by a corresponding discussion back by experimental evidence. Particularly since this design choice restricts the re-use of the proposed approach to data from other sensors, such as LiDAR. - The examined GPU devices are on the power-hungry end of the spectrum. For self-driving scenarios, it also makes sense to consider embedded GPU devices too. -The writing of the paper becomes a bit dense and hard to follow in certain sections of the methodology, leaving some unclear points. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Please provide further discussion/results on how the proposed methods speed and accuracy compares with 3D reconstruction based BEV detectors. - How would the proposed methodology perform when targeting embedded GPUs, such as AGX Xavier or Orin. - The proposed latency modelling seems to study memory and computation in isolation. In the case of memory-bounded layers, does the proposed model capture the latency-penalty of PEs who are stalling due to data starvation (e.g. similar to Roofline model analysis) ? - Would an oracle/benchmarking-based approach, considering the actual latency of each architecture result in different design choices to those taken considering the estimation of the developed model ? Typos: - In Table 3 the HW properties of GTX2080 and GTX1080 appear to be identical. Is this correct ? - Line 213: [3] -> Figure [3] Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The manuscript briefly discusses some limitations of the current approach, mostly in the form of future/active work undertaken by the authors. A more detailed discussion of the assumptions and corresponding limitations of the approach would add value to the manuscript. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **ReW1:** Thank you for this question to give us a chance to clarify our motivations. Currently, the pre-fusion family, such as BevFusion, is very expensive to deploy in practice. The current in-vehicle market is dominated by camera-only solutions, such as Tesla, or post-fusion of camera and lidar signals, such as XPeng Motors, whose camera part still deploys camera-only solutions [1]. Hence, our research goal is to explore the efficient and effective camera-only 3D detection series for practical implementation. So we choose the state-of-the-art frameworks, multi-frame models as the starting point of our research. Moreover, according to [2], the camera processing modules occupy 33% ~ 42% of the whole latency distribution for the post-fusion system. Our hardware-oriented design with enhanced image feature representation can be leveraged in the camera encoder part of the current in-vehicle detection system to improve on-device efficiency. **ReW2:** Thanks for your valuable suggestions. It is necessary to test the speed on an actual commercial chip. As shown in Table E, we test our HotBEV models on Orin to validate our framework. Before testing, we quantized our model into INT8 with a tensorRT engine. And then run the test 50 times for each model in order to obtain stable results. Table E. Results on Orin. |Methods | Backbone | Resolution| NDS↑ |mAP↑ | FPS| |------------|-----|-----|-----|-----|-----| |HotBEV |HOB-nano |512x1408 |0.47 |0.385 |31.8| |HotBEV |HOB-tiny |512x1408 |0.512 |0.407 |20.4| |HotBEV |HOB-base |512x1408 |0.525 |0.427 |16.1| **ReW3:** Thanks for your valuable suggestions. We will put more related works and experiments in the appendix part. We will focus on methodology in the main body of our final version. **ReQ1:** Thank you for guiding us to make a more comprehensive comparison with other branch methods. Table F shows that 3D reconstruction-based BEV detectors are better than our methods by up to 19.8 NDS but with inferior on-device speed than ours by up to 19.4 FPS. Even though the detection performance of image-only detectors cannot be superior to Lidar-only methods, these results also provide us motivation to implement our design into one of the commercial mainstream, post-fusion systems to improve hardware efficiency, as mentioned in W1. Table F. Comparison with 3D reconstruction-based BEV detectors. |Methods | NDS↑ |mAP↑ | FPS| |------------|-----|-----|-----| |PointPillars |61.3 |52.3 |29| |SECOND |63 |52.6 |14.3| |CenterPoint |66.8 |59.6 |12.4| |HotBEV-nano |47 |38.5 |31.8| |HotBEV-tiny |51.2 |40.7 |20.4| |HOB-base |52.5 |42.7 |16.1| **ReQ2:** Please refer the results in W2. Thanks again for your valuable suggestions to help up improve our paper presentation. **ReQ3:** This is really a meaningful comment. When we build up our latency modeling, we do not add the roofline analysis, which can help to analyze the memory-bound layers more accurately. Please note that the self-attention module in the core architecture of the transformer is a memory-bound operation due to softmax operations. This is because Softmax operations follow a two-stage dataflow, requiring buffering of intermediate data since direct output generation is not possible. Additionally, the lack of input data reuse in Softmax further complicates efforts to amortize memory costs through computation. However, in our design, the use of window-based self-attention is reduced, and the global attention is replaced by convolution modulation, significantly reducing the number of softmax, thus mitigating the adverse effect of memory-bound operators on speed. Hence, we do not excuse the analysis of memory-bound. Also, through the latency profiling of our HoB-nano backbone, we do not observe that PEs are stalling due to data starvation. Thanks again for your valuable suggestion. We will add one discussion part to clarify our latency modeling with memory IO analysis. Figure B in the pdf submitted for the rebuttal displays the latency percentage of each module. **ReQ4:** Thanks for guiding us for a deeper analysis of the latency prediction model. Our prediction model is mainly utilized to estimate the on-device latency of matrix multiplication. The latency of other operations will not be included during the network search, which is the same as using the benchmarking-based approach to estimate the on-device latency. Based on Q2 of Reviewer BJ1v, the predicted results of the proposed latency model are consistent with the actual on-device latency. Hence, the architecture results of our proposed latency modeling can be similar as benchmarking-based approach. **ReLimitation**: Thanks for your comprehensive and insightful recommendations. Based on your questions, we analyze it from the point of current practical applications and compare our method with other branches of research. We will add a discussion part for these content. [1] U.S.News, “Vehicles That Are Almost Self-Driving in 202”. Online Available. “https://cars.usnews.com/cars-trucks/advice/cars-that-are-almost-self-driving” [2] Pham, Trung, et al. "NVAutoNet: Fast and Accurate 360o 3D Perception For Self Driving." arXiv preprint arXiv:2303.12976 (2023). **Typo:** Thank you very much for pointing them out. We will correct the typos in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for your remarkable effort to provide clarifications and additional experiments to address all raised comments. I acknowledge I have read them, along with the comments of the other reviewers. This significantly strengthens the contribution of the paper, and I am inclined to increase my score accordingly. Would be great to see parts of the added information to the original manuscript and/or appendix.
Summary: This paper proposed a latency-aware design strategy to search for an efficient network structure for BEV perception. To make this happen, this paper proposed a convolutional modulation layer for replacing native self-attention, and proposed to use BN to replace LN to make the network inference faster. Based on such basic module, a hardware-oriented backbone design is proposed for the following network searching process. The experiments demonstrate that the searched network achieved a good trade-off between inference efficiency and performance. Strengths: 1. The proposed idea of using latency prediction model for efficient neural network searching is reasonable. 2. The finally searched result HoTBEV achieves a good trade-off between performance and efficiency. Weaknesses: 1. The writing and organization of the paper are not easy to follow, since lots of technical details are not clearly presented. For example, (a) the detailed structure of the two-phase design space (Eq. 1) is not clear. (b) Fig 5(c) indicates a structure titled Convolution, and the correspondence between Fig.5(c) and HOB block (fig. 5(b)) is not clear. (c) For L230-L236, what do you mean by mentioning patch embedding? Is it path merging in Fig. 5(b)? Did not you simply replace convolutional with larger kernel size with a series of 3x3 convolutions? (d) The structure of the supernet is also simply mentioned without detailed information. 2. The paper does not discuss the searched structure with the existing works, which can not verify whether it is necessary to use the latency prediction model and conduct the network search process. 3. Table 1 lacks of comparison with some latest BEV works. For example, PETRv2 with ResNet101-DCN, PolarFormer, and Sparse4D, and it seems that all of them achieve similar or better performance than the searched results. 4. The illustration of latency prediction model is not clear. In particular, how to get the latency of computation is not clear. Is it reasonable to use the maximum throughput of each PE? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. For the hardware-oriented decoder, what's the difference compared with DETR3D and PETR? Did you just replace native cross-attention with the proposed convolutional modulation layer? 2. Have you evaluated the predicted latency with the realistic latency? How about the accuracy with your latency prediction model? 3. By comparing with native self-attention, how about the effectiveness of the proposed convolutional modulation layer? 4. For L186-L190, can we directly replace the LN in window-based self-attention with BN? Or we must combine the proposed convolutional modulation layer with BN to make the network converge. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The main concern of this paper is still unclear writing and organization. Besides that, it is not clear whether the network search process is necessary and whether it can really achieve state-of-the-art performance with good inference latency. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **ReW1:** Thank you for highlighting areas in our article's expression that need improvement. (a) The detailed structure of the modules in equation 1 is provided in Figure 9 -11 of the appendix, also our searched structures are provided in Table 13-15 of the appendix. We will mention this in the updated version. (b) We replace the channel-wise attention in Fig. 5(b) and Fig.5(d) with our proposed Convolution modulation of Fig5(c). Our Proposed Hardware-Oriented Decoder is shown in Figure 11 of the appendix, we will move this figure to the main paper. (c) For patch embedding (L230-L236), a convolution stem, which consists of three 3x3 convolutions with a stride of 2, is more hardware-efficient than a larger kernel. (d) Our search space (Figure 5 (b)) is a selection of possible blocks, including RepCNN, WMSA, SWMA, and Convolutional modulation. We propose a simple, fast yet effective gradient-based search algorithm to obtain a candidate network that just needs to train the supernet for once. To train our supernet, we adopted the Gumble Softmax sampling to get the importance score for the blocks within each search space/stage. During each step of training, a number of blocks are sampled to obtain a subnet structure. The latency of this subnet can be estimated using our latency prediction model. The detailed training pipeline is discussed in A.10 and Algorithm 1 of the appendix. Our searched structures are provided in Table 13-15 of the appendix. **ReW2:** Leveraging network search allows us to pinpoint the optimal structure tailored to specific hardware constraints. As shown in Table 12 of the appendix, we compare the performance of our search model against four randomly sampled networks, all operating under identical mapping latency. Our searched HotBEV-nano has better latency or higher NDS/mAP on nuScenes than the randomly sampled networks. The utilization of the latency prediction model is essential, especially when crafting hardware-aware structural designs, which need to consider factors such as inference time, energy consumption, and memory footprint. Our method is accurate. It considers the properties of the target hardware, the model type, the model size, and the data granularity. Furthermore, it mathematically describes the computation latency and data movement latency to accurately predict the actual throughput of each layer (See Figure 8 of the appendix). This model serves as a ready-to-use theoretical framework for general GPU architectures. **ReW3:** Our research specifically targets small models, which is why our results are particularly favorable for these models compared to other studies. For a comprehensive understanding, we also present a comparison with baseline models that possess larger backbones and increased input sizes. Due to the word limit, please see Table C in Reviewer **H5E6**’s response. Notably, we surpass baseline models in frames per second (FPS) while maintaining comparable accuracy levels. **ReW4:** Due to the length restriction of the paper, a comprehensive discussion on our latency prediction model can be found in Appendix A.1.2. In summary, the inputs for the latency prediction model include: 1) the structure configuration of a candidate block, 2) the spatial granularity G, 3) the channel dimension C, and 4) the hardware properties are shown in Table 3. The latency of a candidate block is predicted according to the following three steps: 1)Input/output shape definition. 2) Operation-to-hardware mapping. 3) Latency estimation. **ReQ1:** Our contribution lies in three folds: (1) Motivation: Our methods use convolution instead of self-attention to create associations, which are more memory-efficient (linear memory complexity), especially when processing high-resolution images. Due to the modulation operation, our method differs from traditional residual blocks and can adapt to the input content. (2) Convolution Modulation Design: We use the convolutional features extracted by the depthwise convolutions to modulate the weights of the right linear branch via the Hadamard product operation, as illustrated in Equations 3, 4, and 5 of our appendix. (3) Implementation: The decoder module consists of six layers. we replace the attention with convolution modulation in the first three layers to allow for more efficient processing while still capturing important local dependencies. In the remaining three layers, we employ channel-wise attention to effectively capture and incorporate global information into the decoding process, enhancing its overall performance. **ReQ2:** We randomly sample 100 structures with estimated FPS ranging between 15 and 25, then evaluate the actual latency of these structures on NVIDIA V100. As depicted in Figure A inside the pdf submitted for the rebuttal, the predicted latency closely aligns with the actual latency, with their relationship approximating the line y=x. **ReQ3:** Our proposed convolutional modulation is more memory-efficient than native self-attention, especially when handling high-resolution images. This is attributed to its linear memory complexity, as detailed in Equation 3 of the appendix. In Table 5 of the appendix, we also demonstrate the effectiveness of running 2D detection baselines with or without Convolutional modulation. **ReQ4:** We can directly replace the LN in window-based self-attention with BN. Replacing the LayerNorm directly with the BatchNorm in the original self-attention can make the network hard to converge. This approach is feasible even in the absence of convolutional modulation. In Table 5 of the appendix, we ablation the two proposed methods individually on 2D detection baselines. **ReLimitation** Please refer to Reverviewer H5E6: ReW3 for the network search. And we can achieve the SOTA trade-off of performance and latency among the camera-only methods as Table 1 in the paper. --- Rebuttal Comment 1.1: Title: Further Discussions with Reviewer BJ1v Comment: Dear Reviewer BJ1v: Thank you once again for dedicating your time and effort to evaluating our paper. We hope that our rebuttal addresses the concerns you have. If you have any remaining questions or reservations, we are more than willing to engage in further discussion. Your contribution to enhancing the quality of our paper is greatly appreciated. --- Rebuttal Comment 1.2: Comment: Thank you for your comprehensive rebuttal. The references to the supplementary material did address many of my initial concerns. However, I would like to emphasize the importance of a self-contained paper. The introduction of numerous concepts without succinct explanations posed challenges in comprehension. Furthermore, when examining Table C, the marginal improvements in both performance and efficiency, relative to the state-of-the-art, make me question the need for using NAS to optimize the trade-off between speed and performance for 3D BEV detection frameworks. I also note reviewer H5E6's mention of the theoretical latency prediction model. While Fig. 8 presents favorable results in latency comparison, I wonder about the model's ability to accurately estimate on-device speeds across diverse GPU architectures and diverse deployment environments. Taking into account the feedback of my fellow reviewers, I am inclined to adjust my rating towards a borderline acceptance. I would strongly encourage the authors to release their code, thereby enabling the community to thoroughly assess the proposed framework. Taking into account the feedback of other reviewers, if there is an overall positive consensus finally, I am inclined to adjust my rating towards a borderline accept. I would strongly encourage the authors to release their code, thereby enabling the community to thoroughly assess the proposed framework. --- Reply to Comment 1.2.1: Title: Further Discussions with Reviewer BJ1v Comment: **ReFW**: Thanks for your valuable suggestions. We'll elaborate on these concepts in more detail in our final version. Our proposed indeed achieves marginal improvements in both performance and efficiency. However, the reason that we utilize NAS is for efficient and effective model generation for target devices. Please refer to **ReFe2** of Further Discussions with Reviewer H5E6 for our motivations on NAS with latency predictor: Efficient model generation, which also proceeds with AI democratization. Also, please refer to **ReFe2** of Further Discussions with Reviewer H5E6 again for our analysis of The proposed latency predictor, which focuses on modeling the latency of Matrix Multiplication (MM) with generalizability. I want to express my heartfelt gratitude for your thoughtful and invaluable suggestions. We will make the necessary corrections to our article based on your guidance, and we're also excited to release the code as you've advised.
Summary: The paper proposes a novel hardware-efficient transformer-based framework called HotBEV for camera-only 3D detection tasks. The framework is designed to achieve high-speed inference on multiple devices, including resource-limited ones, by considering hardware properties such as memory access cost and degree of parallelism. Extensive results on nuScenes shows the effectiveness of HotBEV. Strengths: - The paper is quite well written. - The authors have conducted a comprehensive evaluation of their approach on the large-scale nuScenes dataset, resulting in a notable balance between accuracy and efficiency when compared to previous publications. - The idea of latency prediction is easy to grasp and effective. Weaknesses: - Please double check whether the abstract matches with the paper. In the abstract the proposed method is called HETR, but in paper it is called HotBEV. - The paper seems to assemble a lot of engineering tricks to achieve the final performance. For example, using BN in FFN, or using RepCNN layers. These tricks are widely known to be effective. - Latency prediction is an interesting idea but I don't think it is quite different from existing designs in hardware-aware neural architecture search. - The authors did not make a fair comparison with stronger baselines such as (the larger variants of) PETRv2, BEVFormer / BEVFormer++, BEVDepth, BEVDet4D, etc. - The authors did not present experiment results on Waymo Open Dataset, on which BEVFormer achieves really good results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please respond to my concerns in the weaknesses section. New experiment results and more thorough comparisons with SOTA camera-only 3D object detection papers are expected. WOD results are helpful. Please also discuss how the proposed approach takes advantage of unique properties of the task. The current design seems to be assembling bag of tricks for general model design. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have adequately addressed the limitations. === Justifications for final ratings: Pros: + Good experiment results presented in the rebuttal; + Very solid latency measurement; + Promised to release the code. Cons: + Too many tricks in the methodology (mentioned by more than 1 reviewer); + Unclear advantage over transformer-based backbones; + Relatively small advantage over FastBEV, a method that does not require a re-design of the backbone; + Uncertain about the effectiveness of NAS (also pointed out by my colleague). Overall, I believe that the authors have made a good faith effort to address the limitations of their work, but I would like to see the AC carefully investigate the submission before making a final decision. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **ReW1:** We have not updated the abstract of the paper on the OpenReview interface. We sincerely apologize. The title of our submitted paper is “HotBEV: Hardware-oriented Transformer-based Multi-View 3D Detector for BEV Perception”. So our proposed method is ‘HotBEV’. We appreciate your valuable comments. **ReW2:** Thank you for inspiring us to clarify the differences between our design and the existing modules in the final version. Regarding the BN fusion technique, directly substituting Layernorm with Batchnorm in the original self-attention often proves challenging, as the model struggles to converge during training. One distinct advantage of our design, particularly with window-based self-attention, becomes evident when compared to other transformer architectures. Our approach can seamlessly replace Layernorm with Batchnorm, subsequently applying the fusion technique without sacrificing accuracy, as illustrated in Figure 9 of the appendix. In our research, we incorporated RepCNN as a candidate within our proposed latency-aware architectural search. This was to analyze the balance between detection precision and the efficiency derived from using RepCNN. Through our experiments, we found that substituting RepCNN in both the 1st and 2nd stages yielded optimal results, as detailed in Table 9 of the paper. **ReW3:** Traditional hardware-aware network search methods usually depend on the hardware deployment of each candidate within the search space to ascertain latency, a process that is both time-consuming and inefficient. A single candidate demands hundreds of inferences to generate an accurate latency, prolonging the search process. Some contemporary methods, like HAT[1], leverage a latency predictor. This predictor, pre-trained on thousands of real-world latency data points, serves as an offline tool to anticipate candidate latency, rather than determining real latency via inference during the search. However, it still necessitates hundreds, even thousands, of actual speed tests across diverse model structures to form a robust training dataset to train an accurate latency predictor. Moreover, is only applicable to a relatively small search space. For larger search spaces, an increased volume of measured latency data is required as a training set for the predictor, substantially raising the time cost. If the test set is inadequate, the predictor fails to estimate the latency accurately. Our methodology stands out in its accuracy and efficiency. Our latency prediction model is a training-free theoretical model, suitable for general-purpose hardware, GPU. It considers the properties of the target hardware, the model type, the model size, and the data granularity. It then quantitatively captures both the computation latency and data movement latency, enabling it to precisely predict the actual throughput for each layer, as depicted in Figure 8 of the appendix. **ReW4:** Our research specifically targets small models, which is why our results are particularly favorable for these models compared to other studies. For a comprehensive understanding, we also present a comparison with baseline models that possess larger backbones and increased input sizes in Table C. Notably, we surpass baseline models in frames per second (FPS) while maintaining comparable accuracy levels. Table C. Comparison on nuScenes val set. |Methods|Backbone|Resolution|NDS↑|mAP↑|mATE↓|mASE↓| mAOE↓| mAVE↓| mAAE↓|FPS| |-|-|-|-|-|-|-|-|-|-|-| |PETR|ResNet101|1600×900|0.442 |0.37|0.711|0.267|0.383|0.865|0.201|5.7| |PETRv2|ResNet101|1600×640|0.524|0.421|0.681|0.267|0.357|0.377|0.186|-| |BEVDet4D|Swin-B|1600×640 |0.515|0.396|0.619|0.26|0.361|0.399|0.189|-| |BEVDepth|ResNet101|512×1408|0.535|0.412|0.565|0.266|0.358|0.331|0.19|2.3| |BEVFormerv2|ResNet50|1600×640|0.529|0.423|0.618 |0.273|0.413|0.333|0.181|-| |PolarFormer-T|ResNet101|1600×900|0.528|0.432|0.648|0.27|0.348|0.409|0.201|3.5| |Sparse4D|ResNet101-DCN|900×1600|0.541|0.436|0.633|0.279|0.363|0.317|0.177|4.3| |HotBEV|HOB-base|512×1408|0.525|0.427|0.62|0.221|0.36|0.55|0.163|5.5| **ReW5:** Thank you for pointing this out. We ran our HotBEV on Waymo Open Dataset and compared it with State-of-the-art models BEVFormer++ and PETRv2. The results are shown in Table D. Table D. Comparison on the Waymo val set. |Methods | Backbone | mAPL↑ |mAP↑ | mAPH↑ | |------------|-----|-----|-----|-----| |BEVFormer++ |ResNet101-DCN |0.361 |0.522 |0.481| |PETRv2 |ResNet101 |0.366 |0.519 |0.479| |HotBEV |HOB-base |0.371 |0.537 |0.598| **ReQ1:** Thanks for your valuable suggestions. Based on our analysis of GPU performance, we observed that the primary source of latency consistently stems from the backbone. In order to mitigate this speed bottleneck without compromising detection precision, we introduce a potent transformer encoder for feature capturing and fusion. We partition the backbone into four stages, mirroring the data flow granularity seen in ResNet architecture. As the features transition from local to global visual receptive fields, we introduce the HOB block design. Within each HOB block, we employ a sequence of local-wise attention mechanisms to extract local information, i.e., texture-level semantics. This is followed by global attention to enhance abstract-level semantics across the feature map. Moreover, to further bolster low-level semantics within the current stage, we insert a semantic-augmented module comprising an upsampling layer and global attention after every two consecutive HOB blocks, excluding the interval between Stage 1 and 2. To amplify texture-level semantics, we foster information exchange not only within stages but also between them. By leveraging the efficient operators outlined in Section 3.1.2, including Convolutional Modulation, BN Fusing, and Multiple Branch Fusing, we introduce the concept of a "two-phase design space" (DS) for the HOB backbone. [1] HAT: Hardware-Aware Transformers for Efficient Natural Language Processing --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: I would like to appreciate the authors' response first. It requires tremendous efforts to provide new results such as those on the Waymo Open Dataset. I just want to confirm whether you provided correct numbers on WOD. The mAPH on WOD is usually lower than mAP (since it is a harder metric). For your response on latency predictors, I agree that the method from Wang et al. requires explicit collection of a latency dataset. However, the construction of such a dataset is very cheap. For each candidate network structure, you only need to run benchmarking once (maybe thousands of forward passes is sufficient). So it will perhaps only take one day to generate a very large dataset. Then you can easily train a trivial regression model on this dataset. I don't think it is necessary to develop a hardware model. Based on the factors you take into account in your model, I strongly feel that many important factors are missed: e.g. the effect of GPU tensor cores and the cache hit ratios. To the best of my knowledge, even strong GPU simulators cannot accurately model the behavior of latest NVIDIA GPUs (e.g. A100 and H100), so I do not think your model has good enough generalizability and I still prefer the more straightforward approach from Wang et al. and many similar papers. Besides, the paper obviously advertises the efficiency of the proposed method. I'm curious if it is possible to measure all the latency numbers using a TensorRT backend in the future versions (including all the numbers for baseline methods). This could make the results more solid. So far, I did not observe a clear advantage over Sparse4D and PETRv2. It is very likely that PETRv2 is as fast as PETR (or even faster, because of the smaller input resolution), and it is noteworthy that PETRv2 is an important baseline that directly inspires this work. Therefore, if the method cannot show a significant advantage over PETRv2 under all model sizes, I don't think it is possible to recommend accepting this paper into NeurIPS 2023. Finally, whether the paper is accepted or not, I would suggest the authors to prioritize results on WOD in the future versions. It seems that the improvements on WOD is more obvious, and WOD is considered as a larger and more representative dataset compared with nuScenes. However, I do understand that if the authors are going to do so, it is also necessary to re-run many baselines on WOD. --- Reply to Comment 1.1.1: Title: Further Discussions with Reviewer H5E6 Comment: **ReFe1**: Sorry about our confusion. We entered incorrect numbers, and we should swap the values in the mAP and mAPH columns. We made a mistake. Your valuable suggestion is greatly appreciated. **ReFe2**: For the latency predictors, we couldn't agree with you more. Your perspectives also provide opportunities for our designs. (1) Efficient model generation, which also proceeds with AI democratization. As mentioned that benchmarking-based approach needs one-day training, our proposed theoretical latency predictor is training-free. For example, the benchmarking-based approach requires 5 days to generate the dataset of 5 different devices if 5 target models are demanded. In contrast, our proposed is off-the-shelf. Our proposed method provides the opportunity for inexpensive and efficient research for users who do not have access to target devices. For instance, when the in-vehicle Orin chip is not accessible, the related efficient model research on the Orin chip can still be advanced. In conclusion, our approach makes sense for today's rapidly growing demand for autonomous driving. (2) The proposed latency predictor focuses on modeling the latency of Matrix Multiplication (MM) with generalizability. I agree with you that 'strong GPU simulators cannot accurately model the behavior of latest NVIDIA GPUs.' However, our purpose is not to describe the behavior of GPUs. We want to reflect on the relative performance of latency for different layer types and sizes on target GPUs. This is because our search goal is to minimize the relative time in the search space of the current device. For the GPU tensor core you mentioned, we searched based on the theoretical peak of the FP32 tensor core, so the tensor core characteristics were considered. As for cache hit ratio, in GPU, it mainly happens in the following cases: data transfer between layers, data layout change (data transpose), data type conversions (reformat, e.g., from FP32 to INT8 ), nonlinear kernels (nonlinear functions: low data reuse). However, in the network search, what the user needs to know is the on-device latency of each layer, i.e., the latency of matrix multiplication. This means the scenarios mentioned above of cache hit need not be considered. Our prediction model is mainly utilized to estimate the on-device latency of matrix multiplication, which is the same as using the benchmarking-based approach to estimate the on-device latency. For generalizability, our design focuses on latency modeling of MM, the typical computation operation in DNNs, which is mainly impacted by the computing performance of Tensor Core, not other specific operators, so the proposed predictor has generalizability, as shown in Figure 7 of our paper.
Summary: This paper presents HotBEV, a new model developed for 3D detection tasks. By prioritizing actual on-device latency and considering key hardware properties, HotBEV achieves impressive reductions in computational delay. This optimization allows for real-time decision-making in self-driving scenarios, making it a significant contribution to the field. The model's versatility, being compatible with both high-end and low-end GPUs, further underscores its practical value. Rigorous experimental validation showcases the model's superior performance in terms of speed and accuracy compared to existing solutions. Strengths: 1. The model is compatible with both high-end and low-end GPUs, demonstrating a broad range of applicability. 2. The proposed method successfully achieve a delicate balance between model speed and detection precision. 3. They utilize a theoretical latency prediction model to guide their design, an innovative approach that differs from the typical focus on computational FLOPs. Weaknesses: 1. The comparison is not fair in main experiments. The length of temporal fusion is critical to model performance. SoloFusion [1] suggests that a longer temporal sequence does not affect the model's FPS (Frames Per Second). This paper's methodology employs four frames for temporal fusion, while most comparative methods use only one to two frames. 2. The exploration of utilizing convolutional modulation, as opposed to self-attention, for relationship building is already documented in certain works [2], which have not been incorporated into this paper's analysis. [1] Time Will Tell: New Outlooks and A Baseline for Temporal Multi-View 3D Object Detection. [2] You Only Segment Once: Towards Real-Time Panoptic Segmentation Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. To ensure an equitable comparison, the inclusion of SoloFusion with a four-frame input is indispensable. 2. Comprehensive relevant literature should be integrated into this paper for completeness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: This paper discuss the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **ReQ1:** For a fair comparison with SoloFusion, we tested a 4-frame version of SoloFusion, as shown in Table A. Across multiple benchmarks, our HotBEV consistently outperforms SoloFusion, with the exceptions of mATE and mAVE. Furthermore, HotBEV achieves a 35% faster FPS compared to SoloFusion. Table A. State-of-the-art Comparison on nuScenes val set. |Methods | Backbone | Resolution |Frames | NDS ↑ | mAP↑ | mATE↓ | mASE↓ | mAOE↓ | mAVE↓ | mAAE↓ | FPS | |------------|-----|-----|-----:|-----|-----|-----|-----|-----|-----|-----|-----| |SOLOFusion| ResNet50| 256 × 704| 16| 0.534| 0.427| 0.567| 0.274| 0.511| 0.252| 0.181| 11.4| |SOLOFusion| ResNet50| 256 × 704| 4| 0.494| 0.362| 0.607| 0.304| 0.539| 0.293| 0.19| 12.2| |HotBEV| HOB-base| 256 × 704| 4| 0.506| 0.369| 0.625| 0.264| 0.362| 0.364| 0.153| 16.5| **ReQ2:** Thank you for sharing this paper. Upon review, we observed that while both our methods employ convolution, our design demonstrates greater efficiency. YOSO incorporates a Separable Dynamic Decoder, specifically substituting the Multi-Head Cross-Attention with the Separable Dynamic Convolution, which consists of a depthwise convolution followed by a pointwise convolution. As detailed in Equation 10 of [2], its computational complexity is represented by $2ndt+2n^2d$. Notably, this complexity grows quadratically with increasing sequence length N. In contrast, our approach employs depthwise convolution combined with the Hadamard product to determine the output, as illustrated in Equations 3, 4, and 5 of our appendix. Our computational demands rise linearly, rather than quadratically, as the image resolution escalates. Moreover, it's evident that smaller models derive greater benefits from the Hadamard product [1]. When we substituted our design with YOSO's Separable Dynamic Convolution, the results are presented in Table B. Table B. Convolutional Modulation Comparison on nuScenes val set. |Methods | Backbone | Resolution |Frames | NDS ↑ | mAP↑ | mATE↓ | mASE↓ | mAOE↓ | mAVE↓ | mAAE↓ | FPS | |------------|-----|-----|-----:|-----|-----|-----|-----|-----|-----|-----|-----| |HotBEV |YOSO| 256 × 704| 4| 0.498| 0.36| 0.643| 0.277| 0.371| 0.375| 0.162| 15.1| |HotBEV |HOB-base| 256 × 704| 4| 0.506| 0.369| 0.625| 0.264| 0.362| 0.364| 0.153| 16.5| [1] Conv2Former: A Simple Transformer-Style ConvNet for Visual Recognition [2] You Only Segment Once: Towards Real-Time Panoptic Segmentation --- Rebuttal Comment 1.1: Comment: Thank you for providing a comprehensive rebuttal that addresses the majority of my concerns. As a result, I am leaning towards revising my evaluation to a borderline acceptance score. However, it should be noted that there are too many tricks in this paper which was pointed out by Reviewer H5E6.
Rebuttal 1: Rebuttal: We first sincerely thank every reviewer for your insightful and constructive feedback. Then, we will answer the specific questions from each reviewer. We upload a pdf file with figures (Figure A,B) which we will present in our rebuttal. Pdf: /pdf/6f567aee476f88c7450b9193ddf6ec097bed0d47.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
On the Last-iterate Convergence in Time-varying Zero-sum Games: Extra Gradient Succeeds where Optimism Fails
Accept (poster)
Summary: This paper considers the problem of unconstrained min-max optimization with bilinear structure. Specifically, this paper considers the setting where the payoff matrix $A_t$ changes over time. The two changing dynamics of $A_t$ considered in this paper is the periodic game and the converging perturbed game. Three strategy dynamics are cosidered in this paper, which is OGDA, extra gradient (EG) and Negative momentum method. In this paper, the authors show that in the period game case, under certain initialization, OGDA provably diverges no matter what learning rate is chosen while EG with a certain choice of learning rate converges to the common NE with exponential rate. In the convergent perturbed game case, all the three types of algorithms converge with a proper choice of learning rate. The analysis is basd on analyzing the eigenvalues of the linear operator in the recursive form of the strategies. Empirical results further verifies their theoretical results. Strengths: - The min-max optimization with time-varying functions is an important question and has wide real-world applications. - This paper first considers the last-iterate convergence performance for the unconstrained bilinear min-max problem and show that under two different changing dynamics of the bilinear matrix, OGDA performs provably differently and EG always converges. Specifically, the diverging results for OGDA looks intereting to me. - I did not check all the proof details but the general idea of the proof makes sense to me. Weaknesses: - One issue is whether the optimization of unconstrained min-max optimization with bilinear structure is interesting. Specifically, even when the game is changing over time, one trivial Nash Equilibrium is $(0,0)$. Therefore, all the three algorithms initialized with $(0,0)$ will converge, though the dynamic of these three algorithms may still be interesting with different initializations. Extending the current framework to a slightly general case $x^\top A_ty+b_t^\top x+c_t^\top y$ or an even more general convex-concave function $f(x,y)$ will strengthen the results a lot. - As the authors mentioned, the constrained version of the min-max optimization problem may be more interesting and the dynamic of the strategy can not be expressed in a clean linear recurrsive form as shown in the unconstrained bilinear case. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - As mentioned in weakness, whether the current result can be extended to a generalized linear case with $f_t(x,y)=x^\top A_ty+b_t^\top x+c_t^\top y$? - Is there a specific reason considering the periodic games and the convergent perturbed games? Can the results generalize to some other type of time-varying games with common NE? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weakness and questions for details. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your support and interests to our results, especially we thank you for proposing important and challenging question to strengthen current results. Please see our itemized responses below: 1. Whether the current result can be extended to a generalized linear case with $f_t(x,y) = x^{\top}A_ty + b^{\top}_tx + c^{\top}_ty $ ? Thank you for asking this important question. Generalizing the current results to other settings, such as convex-concave payoff, is definitely an important question for future research. Our results can be extended to the above general linear case, as the linear payoff can be reduced to bilinear case with translation of the strategy variables. Similar technologies were also introduced in (Zhang et al. convergence of gradient methods on bilinear zero-sum games) in the stationary game setting. In the following we provide a more detailed explanation. For convenience we formulate the payoff function as \begin{align}%\label{gene} f_t(x,y) = x^{\top}A\_ty + x^{\top}b\_t + c^{\top}\_ty. \\ \\ \\ (1) \end{align} By first order stationary condition, if a point $(x^*,y^*)$ is a equilibrium of $f_t(x,y)$, then \begin{align}%\label{equi} A\_ty^* = - b\_t, \ A^{\top}\_t x^* = -c\_t.\\ \\ \\ (2) \end{align} For a periodic game with payoff matrix $\\{A\_t\\}^{\infty}\_{t=1}$, payoff vectors $\\{b\_t\\}^{\infty}\_{t=1}, \\{c\_t\\}^{\infty}\_{t=1}$ and period $\mathcal{T}$, if a strategy point $(x^*,y^*)$ is a common equilibrium, then it satisfies (2) for any $t \in [\mathcal{T}]$. We assume such common equilibrium exists, thus there exist $(b,c)$ such that \begin{align}%\label{111} A\_t b = - b\_t, \ A^{\top}\_t c = -c\_t,\ \forall t \in [\mathcal{T}]. \\ \\ \\ (3) \end{align} Now we denote $x' = x - c,\ y' = y - b$, then it can be verified that \begin{align} \min\_x \max\_y f\_t(x,y) = \min\_{x'} \max_{y'} (x')^{\top}A\_ty'+c^{\top}A\_tb. \end{align} Thus every periodic game with general linear payoff is an equivalent to a bilinear periodic game considered in the current paper, thus the result in Theorem 3.1. also holds in general linear payoff case. However, according to (3), it may be difficult for a periodic game with a general linear payoff to have a common equilibrium. This is different from the bilinear game where $(0,0)$ is always a common equilibrium. For convergent perturbed game, it can be demonstrated that Theorem 3.3. still holds true with a similar translation of the variables and techniques employed in the paper. **** 2. Is there a specific reason considering the periodic games and the convergent perturbed games? Can the results generalize to some other type of time-varying games with common NE? For the first question, please refer to the second part of the global rebuttal. For the second question, generalizing the current results to other type of time-varying games with common NE is a very interesting question. It is not clear to us if our results would carry over for more general class of games with common NE and if our techniques will be applicable. An assumption that probably is needed if one wants to show convergence (e.g., of extra gradient), is that the difference $||A_{t+1}-A_t||_2$ is not too large for all $t$. We would like to mention that general theory of non-autonomous linear differential/difference equation is not as simple as autonomous ones, it is still a quite open-ended area by itself (see Fritz Colonius and Wolfgang Kliemann. Dynamical systems and linear algebra, volume 158.344 American Mathematical Society, 2014.). Thus we believe that any interesting generalization will depends on some specific observations case by case, which means there are many potential problems to be answered.
Summary: This paper studies the last-iterate behavior of three different algorithms on two types of time-varying zero-sum games: periodic and convergent perturbed games. For periodic games, the authors prove that EG will converge while OGDA and the negative momentum method could diverge. For convergent perturbed games, all these algorithms converge as long as the error term decays sufficiently fast. Strengths: 1. The paper is well-organized and easy to follow. 2. The different behavior between EG and OGDA seems novel in the literature. Weaknesses: 1. Some results lack an intuitive explanation. 2. Some figures are not well-plotted. Maybe a log-log graph is more proper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Could the authors provide some examples of periodic and convergent perturbed games? 2. In line 225, the authors say that the key point on the convergence of EG for periodic games is that the iterative matrix is a normal matrix. Could the authors give a more intuitive explanation? Moreover, the update rule of EG only depends on the current point $(x_t, y_t)$, while OGDA and negative momentum method also need to employ the gradient information at the last point $(x_{t-1}, y_{t-1})$. Is this a reason for the different behavior of these methods? 3. For the experiments on Theorem 3.4, the authors only plot the convergent curve of EG. I wonder about the behabior of the other two methods in this case. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your supportive comments, suggestions and questions on intuitive explanation and experiments. Please see our itemized responses below: 1. Could the authors provide some examples of periodic and convergent perturbed games? Please refer to part 2 of the global rebuttal for an explanations of this question. **** 2. Intuitive explanation for why EG's iterative matrix is a normal matrix. Thank you for asking this important question. "EG's iterative matrix is normal" is an observation that is also surprising to authors. Our intuition actually came from experiments. First of all, we expect that all iterative matrices to be diagonalizable. However, we observe that the dynamics never diverges, which is a much stronger phenomenon than being diagonalizable. Since normal matrices has eigenvalues with magnitudes at most 1, and this means their dynamics never diverge, which agreed with all the experiments we had run. Therefore, we turn our attention from diagonalizable to normal and eventually discover only EG is normal, and it became the key property for periodic settings. For the differences between EG and optimistic/momentum methods, other than their disparities in eigenvalues/eigenvectors. The suggestion that the differences in dependence between the update rules and past payoffs is an interesting viewpoint, we will consider this to see whether it can give an intuitive explanation. We also verify that the iterative matrix of gradient descent algorithm, which also uses only one step information is normal matrix. **** 3. Behaviors of the other two methods in this case in the experiments on Theorem 3.4, The other two methods also converge under the conditions of experiments on Theorem 3.4. However, we have also found a numerical example that does not satisfy the BAP assumption, and it seems that the optimistic gradient/momentum method does not converge in this example. Please refer to the first part of global rebuttal for additional experiments and explanations of this question. --- Rebuttal Comment 1.1: Title: Thanks for the replies. Comment: Thanks for the detailed replies, which have addressed my concerns. I increase my score to 6. --- Reply to Comment 1.1.1: Comment: Thank you for your interest to the work and intriguing questions. We appreciate your boosting to the score, thank you so much!
Summary: This paper studies the problem of learning Nash equilibria in two-player zero-sum bilinear games where the payoff matrix varies with time. When the payoff matrix is a periodic function, it is proven that the extra gradient algorithm converges to a Nash equilibrium, whereas the optimistic gradient descent ascent and negative momentum method could diverge from the equilibrium. Furthermore, it is shown that all the above three algorithms converge to the equilibrium for the convergent perturbed game. Strengths: * The paper is well-written and studies an interesting problem. * The author provided an instance of periodic zero-sum games where extra gradient algorithms and optimistic gradient descent ascent algorithms behave distinctly differently. * The analysis looks to be sound, and the experimental results seem to bare out the theory. Weaknesses: * Perhaps a short intuitive explanation of why extra gradient is better than optimistic gradient descent ascent would help readers understand the main results of this paper. * The titles and legends in Figures 1,2,3 and 4 should be enlarged. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * (Section 2.1) What is the definition of $\mathrm{ker}(\cdot)$? * (Section 3.2) What are the key differences between the extra gradient algorithm and the other two algorithms? * (Section 3.2) For the periodic games with $\lambda_{\ast}=0$, would similar convergence/divergence results hold to Theorem 3.1 and 3.2? * (Section 4) Do the optimistic gradient descent ascent algorithm and negative momentum method empirically converge in the same instance in the experiments for Figure 4? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your support, careful reading and helpful suggestions. Please see our itemized responses below: 1. What is the definition of $\ker(\cdot)$ ? For a payoff matrix $A \in \mathbb{R}^{n \times m}$, $\ker(A)$ is defined to be the set $\\{ x \in \mathbb{R}^m |\ Ax = 0 \\}.$ **** 2. What are the key differences between the extra gradient algorithm and the other two algorithms? *A quick answer: Only the iterative matrix of extra gradient is a normal matrix, while the other algorithms' iterative matrices are just diagonalizable but not normal. *A detailed answer: From the techniques we used to prove the separation between extra gradient and the other two methods, the key difference is that the iterative matrix of extra gradient algorithm is a normal matrix, while the other two algorithms are not. For example, in the period case, the advantages of normality are twofold : it helps to bound the modulus of eigenvalues of iterative matrix and makes the strategy components that do not correspond to equilibrium vanish over time. In the following, we provide further explanations. Firstly, let's denote the overall iterative matrix as $\prod_{t=1}^T \mathcal{A}\_t$, which is product of iterative matrix for a period of $T$. For matrix $\prod_{t=1}^T \mathcal{A}_t$, the maximum modulus of eigenvalues is no more than 1 due to the normality, thus during the iterative process, the strategy can't diverge. Secondly, we introduce a decomposition of the two players' current strategy $X_t$ into two parts. The first component $X_t^1$ represents eigenvector corresponding to eigenvalue $1$ of the iterative matrix, which corresponds to an equilibrium of the game. The second component $X_t^2$ consists of linear combinations of other eigenvectors. Due to the normality property of the iterative matrix of extra gradient, $X_t^1$ remains unchanged over time and modulus of $X_t^2$ approaches zero as time goes to infinity. (See Corollary B.4 and proof of Theorem 1 in Appendix B). This ensures convergence of extra gradient algorithm towards Nash Equilibrium point. For the other two algorithms, without the normality property, for the overall iterative matrix, as we shown in the example (See Theorem 3.2 and its proof in Appendix C), its maximum modulus of eigenvalues can be larger than 1. Then, we can choose the initial point which makes the strategy point diverge from Nash Equilibrium point. The normality of iterative matrix of extra gradient also plays important role in convergent perturbed game. **** 3. For the periodic games with $\lambda_* = 0$, would similar convergence or divergence results hold to Theorem 3.1 and 3.2? Yes, the result still hold, since in this case from the general result in linear algebra, the iterate matrix $\prod^T_{t=1} \mathcal{A}_t$ will be a nilpotent matrix, which means after finite time iteration, the strategy $(x_t,y_t)$ must come to be the trivial equilibrium $(0,0)$. The divergence result in Theorem 3.2 depend on the the fact that for the special periodic game defined by the payoff matrix \begin{align} A_t= \begin{cases} \left[1,-1\right], & t \textnormal{\ \ is \ odd} \\\\ \left[-1,1\right], & t\textnormal{\ \ is \ even} \end{cases}, \end{align} the iterate matrix of optimistic gradient and negative momentum method always have a eigenvalue with modulus larger than $1$, combine with Floquet theorem (proposition 2.4, line 166), this is enough to show the exponential divergence result, and whether $\lambda_* = 0$ is irrelevant for Theorem 3.2. **** 4. Do the optimistic gradient descent ascent algorithm and negative momentum method empirically converge in the same instance in the experiments for Figure 4? Yes, both the optimistic gradient descent ascent algorithm and the negative momentum method also converge in the same instance in the experiments for Figure 4. However, we also find examples where these two methods do not empirically converge due to violations of the BAP assumption, please refer to part 1 in the global rebuttal for detailed explanation. **** Thank you for your comments on the titles and legends in the figures, we will address these issues in the new version of the paper.
Summary: Over the last few years an extensive literature has studied the last iterate convergence of learning dynamics in zero-sum games, particularly those using optimism and extra-gradient approaches. This paper extends that approach to two classes of unconstrained non-stationary game (periodic and decaying noise) and shows a difference in behavior between the two classes. Strengths: The problem studied is natural and clearly explained with strong results, including the interesting split in behavior (Theorems 3.1 and 3.2) The approach uses several technical innovations derived from the literature on non-autonomous linear difference systems. Weaknesses: The particular classes of games studied could be better motivated. Is there a reason to focus on these other than technical convenience? Do they have important applications? What more general but harder class is this a step towards? The conclusion mentions constrained games as future work, nothing about other interesting directions fof future work. Are there other theorems from non-autonomous linear difference systems that seem promising? There is a gap left by Theorems 3.3 and 3.4 about whether the other two dynamics converge under the weaker conditions of Theorem 3.4. Figure 4 gives an experiment on this setting, so it seems like a missed opportunity to not at least show what the other dynamics do on this example. Minor comments: - There is some inconsistency in the way “equilibrium” is used. It is defined on line 98 as what I would instead have called the set of equilibria. However, in other places, like line 252 it seems to be used to refer to a single element of this set. Depending on how this is resolved, some care may be needed in places about whether something is “the” or “a” equilibrium (e.g. on line 216). - Line 178 and 181 the formulas use n but the limits are taken over t - Line 216 the “common Nash equilibrium” is not defined, nor is there an explanation of why it mush exist (presumably because 0 is always an equilibrium?) - Line 305 group[s] Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please respond regarding the two main weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The conditions to which the results apply are clearly explained. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your support, interest to current work, and inspiring question for future work. Please see our itemized responses below: 1. Is there a reason to focus on these classes of games other than technical convenience? Do they have important applications? What more general but harder class is this a step towards? For the first and second question, please refer to the second part in global rebuttal for a detailed explanation. For the third question, we believe that the class of convergent perturbed games with a convergence rate slower than $\frac{1}{t}$ is more challenging compared to our current setting. As discussed in Section 5, the dynamical behaviors of optimistic gradient and negative momentum methods under this setting remain unclear. Based on the numerical results presented in the experiments in global rebuttal, we believe that the dynamical behaviors of optimistic gradient and negative momentum method under this setting are much more complex than those in the BAP setting, as stated in Theorem 3.3. **** 2. The conclusion mentions constrained games as future work, nothing about other interesting directions of future work. Are there other theorems from non-autonomous linear difference systems that seem promising? Stability theory of non-autonomous dynamical system from learning algorithms has been studied recent years, e.g. "AdaGrad Avoid saddle points, Antonakopoulos et al." and "First-order method almost always avoid saddle points:the case of vanishing step-size, Panageas et al.". Both works investigate non-autonomous dynamical systems from non-convex minimization problems. Their method belongs to a general theoretical framework called Lypunov-Perron method and stable manifold theorem, which enables one to prove certain dynamical systems do not converge to unstable fixed points. One promising future direction is to fit such method into min-max optimization algorithms so that we can have a better understanding on the asymptotic behavior of different algorithms in time-varying settings. One might expect some result similar to "The limit points of (optimistic) gradient descent in Min-Max optimization, Daskalakis et al.", but in a time-varying setting. Since the Lyapunov-Perron method depends on spectral analysis of update rules (spectral analysis is a main technique in all aforementioned papers), it is interesting to explore if these techniques can be combined with current paper. **** 3. Gap between Theorem 3.3 and Theorem 3.4. Please refer to the first part of the global rebuttal for additional numerical results and explanations regarding this question. **** Thank you for the other comments on writing, the typos will be fixed in the new version of the paper. --- Rebuttal Comment 1.1: Title: Thanks Comment: This addresses my questions --- Reply to Comment 1.1.1: Comment: We are glad to answer your questions, thank you for proposing them. Thanks again for your strong support!
Rebuttal 1: Rebuttal: We appreciate the efforts of all reviewers, thanks for your constructive questions and critical suggestions! The PDF file contains additional experimental results as requested by reviewers. If you have questions about the experimental parts of the paper, please refer to this file. In the following, we address two questions that have been raised by several reviewers. **** 1. Gap between Theorem 3.3 and Theorem 3.4., and behaviors of optimistic gradient/momentum method in the instance in Figure 4 of the paper. We provide additional experiments to demonstrate the behaviors of the optimistic gradient and negative momentum methods in convergent perturbed games that do not satisfy the BAP assumption. The numerical results reveal cases where optimistic gradient/momentum method converge ( Figure 1 in PDF records experimental results) and cases where they do not converge (Figure 2,3 in PDF records experimental results). In the same setting as the experiments on Theorem 3.4 , we find that both optimistic gradient descent ascent and negative momentum method converge as shown in Figure 1 of the PDF records experimental results. However, there are other cases in which these two algorithms do not converge. In Figure 2 and 3, we present one such example. Here the payoff matrix is chosen as $A = [ [1,0],[0,0]], B = [[0,8],[0,0]]$ and \begin{align} A_t= \begin{cases} A, & t \textnormal{\ \ is \ odd} \\\\ A + (1/t^{0.1}) * B , & t\textnormal{\ \ is \ even} \end{cases}. \end{align} In Figure 2, the numerical results show when using a step size of $0.015$, optimistic gradient and negative momentum algorithms will diverge, but extra gradient will converge. In Figure 3, by reducing the step size to very small numbers ($0.0003$ or $0.0001$ in this case), optimistic gradient and negative momentum algorithms do not seem to diverge but maintain a nonzero distance from the equilibrium points. Based on these numerical results, we believe that beyond the setting that satisfies the BAP assumption mentioned in line 256 of the paper, there exists a more complex relationship between the dynamical behaviors of optimistic gradient and negative momentum methods and their respective step sizes, which presents an interesting question for future exploration. **** 2. Reasons to consider periodic/convergent perturbed game and examples of them. We note that both of these game classes have been studied in previous literature as testing grounds of learning algorithms in online learning community ([11], [14]). However, our paper focuses on different viewpoints compared to the existing research, making it a natural extension of this line of research. Moreover, periodic game and convergent perturbed game are natural generalizations of the usual repeated game formulations. When the period $\mathcal{T} = 1$, the periodic game becomes a repeated game, and when the perturbation $B_t = 0$, then convergent perturbed game becomes a repeated game. The periodic games model competitive settings where the exogenous environment varies periodically. This naturally occurs in competitions where day-to-day trends, and seasonality can affect the game between players. For example, consider a competition between two species whose life states are affected by seasonal changes or sellers in a fish market, where the value of fish depends on their freshness, thus exhibiting daily period behavior. Periodic zero-sum games can also fit into the frameworks of multi-agent contextual games (Sessa et al. Contextual games: Multi-agent learning with side information, NIPS 2020). In a multi-agent contextual game, the environment selects a context from a set before each round of play and this selection determines the specific game that will be played. Periodic zero-sum games can be viewed as a multi-agent contextual game where the environment periodically chooses contexts from the available set, creating zero-sum games with a common equilibrium. The convergent perturbed game naturally models games with noise that decays over time. The noise can arise from players' own beliefs, such as in an auction where a certain type of goods is being auctioned off day after day to buyers, and the buyers' assessment of the value of this good will eventually converge to a fixed value as their experience growth. The noise can also model external effects when players repeatedly play the game, such as interference factors in the feedback process. **** We are looking forward to further discussion in the next stage of review process. We thank your again for all your hard work! Pdf: /pdf/77ce080c66101e7a95d3be6b5d181ccbb2308851.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This work investigates the problem of last-iterate convergence in time-varying zero-sum games. Specifically, the authors study the last-iterate convergence of three kinds of algorithms (OGDA, EG, and negative momentum method) considering two kinds of time-varying games with specific structures, i.e., periodic games and convergent perturbed games. The authors obtained a convergence rate of EG in periodic games while providing a counterexample showing that OGDA and negative momentum method will diverge in this kind of game. In convergent perturbed games, the authors showed that all three kinds of algorithm will converge with a rate dependent on the perturbation $B_t$. Finally, experiments validate the statements proposed by the authors. Strengths: In general, I think there is no last-iterate convergence rate in time-varying games due to the changing nature of the games. However, with specific problem structures, I believe that last-iterate convergence results can be established, and this work provides a piece of clear evidence. Overall, although the solutions are relatively simple by modeling the learning algorithms as dynamical systems and leveraging some existing results in the control theory, I am satisfied with the motivation and the final results of this work. Weaknesses: There are no significant weaknesses that affect my rating of the whole paper. In the following, I only list some minor issues. 1. The notion of $T$ in periodic games. The notion of $T$ is commonly used to refer to the total number of rounds in online learning. In this work, the authors use $T$ to represent the number of rounds inside each period, which may lead to unnecessary understandings. I suggest the authors could choose a different notion to denote the period length. 2. The notion of $\text{ker}(\cdot)$. This notion, which appears in Line 98 for the first time, seems not to be defined or given a clear description before. 3. In Line 120, the authors mention that 'Recently, there are also works analyzing the regret behaviors of OGDA under a time-varying setting [1]'. The reference to [1] is not accurate enough. The work of [33], which is published at ICML2022, firstly gives a comprehensive study of the optimistic methods in time-varying zero-sum games by optimizing multiple performance measures simultaneously. I suggest the authors could refer to [33] in the aforementioned statement to make the credits more accurate. 4. The negative momentum method requires the $x$-player to evolve to the next round first to give its gradient $A_{t+1} x_{t+1}$ to the $y$-player. In some cases where the learning procedure is strictly in an online style, this algorithm is NOT applicable since both players must act first to make the game evolve. I suggest the authors could give a discussion on this point in the revised version. Typos: 1. Line 89: 'Section 4' not 'section 4'. 2. Line 188: 'Section 3.3' not 'Section3.3'. 3. The reference to theorems are not unified, e.g., in Line 237, 'theorem 3.2', and in Line 255, 'Theorem (3.1)'. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I only have a major question on Theorem 3.1. What is the range of $t$? If $t$ is only chosen in $[1,T]$, choosing $t=T$ yields a convergence rate of $O(\lambda_* \text{poly}(T))$, which can be seen as a constant in terms of $T$. As mentioned in the 'Weaknesses' part, a bound linear in $T$ is pretty bad in the standard online learning convention. Theorem 3.1 exhibits an exponential rate only when $t$ can be significantly larger than the period length. Can the authors give some further explanations on this point? In the end, I am very curious about whether the method in this paper, which models the learning algorithms as dynamical systems, can be applied to the constrained case, where the projection operations may bring some unique challenges to the modeling issue. Can the authors give some further explanations on this point? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Please mainly refer to the 'Weaknesses' part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful reading, supportive comments, and helpful suggestions in improving the paper. We will clarify the issues in revision. Please see our itemized responses below: Questions part: 1. What is the range of $t$ in Theorem 3.1 ? Here $t$ denotes the number of rounds that players have played in the game, thus $t$ takes value in range $[0,+\infty)$ and $T$ denote the length of the period, which can be thought as a constant for a given periodic game. In Theorem 3.1 we provide a $\mathcal{O}( (\lambda_*)^{t/T} \cdot \text{Poly}(t) )$ with $ \lambda_* <1$ bound on the distance between the current strategy and equilibrium. Therefore, when two players use extra-gradient, both players' current strategies will converge to equilibrium with an exponential rate. **** 2. Whether the method in this paper, which models the learning algorithms as dynamical systems, can be applied to the constrained case ? Learning algorithms in which there are constraints (like simplex constraints in games) can be analyzed using techniques from dynamical systems (e.g., see Last-Iterate Convergence: Zero-Sum Games and Constrained Min-Max Optimization, Daskalakis et al). As long as the NE is in the interior of the constrained set, the same techniques as used in this paper are applicable. The main challenge occurs when the NE is on the boundary of the constrained set. Our techniques can be applied in the constrained case as long as the constrained set can be expressed as a polytope (linear equalities and inequalities). In the more general case in which the set is an arbitrary convex set, more involved techniques are needed and further assumptions (assumptions on the curvature of the constrained set, see for example analysis of manifold gradient descent in First-order Methods Almost Always Avoid Saddle Points Lee et al.). This is an interesting open question for future investigation, though we believe similar results will occur. There are several techniques from differential geometry that deal with constraints (see for example analysis of manifold gradient descent in First-order Methods Almost Always Avoid Saddle Points Lee et al. which expresses manifold gradient descent as a dynamical system). As long as the constrained set is a smooth compact submanifold, one can use the projection operator and Riemannian gradient to express the learning dynamics as a dynamical system. Our techniques can deal with polytope constraints, it is very interesting future question to generalize these results to more general constrained convex sets. **** Weaknesses part: Thank you for your helpful comments. 4. Notion of $T$ In the new version of the paper we will use $\mathcal {T}$ to represent the length of period in a periodic game. **** 5. Definition of $\ker(\cdot)$ We will add a formal definition of $\ker(\cdot)$ in line 98 in the paper. Here $\ker(A) = \\{ x \in \mathbb{R}^m | Ax = 0 \\}$. **** 6. Inaccurate reference Thank you for your pointing out this issue. We apologize for our inattention. In the new version we will refer [33] in line 120 to ensure more accurate credits. **** 7. Question about $A_{t+1}x_{t+1}$ in negative momentum method. Here we are using the $\textbf{alternating}$ negative momentum method studied in (Gidel et al. Negative Momentum for Improved Game Dynamics) instead of the usual $\textbf{simultaneous}$ setting. In the alternating setting, at round $t$, the $y$-player first chooses his strategy $y_t$ based on the payoff caused by $A^{\top}\_{t-1}x\_{t-1}$, and then the $x$-player choose her strategy $x_t$ based on the payoff caused by $A_{t}y_{t}$. In the stationary game setting, (Gidel et al. ) proved that the simultaneous negative momentum method will lead to an exponential divergence rate, while the alternating negative momentum method will converge. Since our focus in this paper is on determining which algorithm will converge in a time-varying game, we only consider the alternating setting. **** Typos will be corrected in the new version of the paper, thank you for pointing them out. --- Rebuttal Comment 1.1: Title: Thanks for the explanations Comment: Thanks for the detailed feedback, which has fully answered my questions. --- Reply to Comment 1.1.1: Comment: You are very welcome! Thanks a lot for reading and agreeing with the response.
null
null
null
null
null
null
Schema-learning and rebinding as mechanisms of in-context learning and emergence
Accept (spotlight)
Summary: The authors attempt to analyze the reason for the success of In-context learning (ICL) for few-shot learning regimes. They apply ICL to clone-structured causal graphs (CSCGs) that can be used to interpret how ICL works in LLM. The CSCG is constructed as causal graph model where for the task of next token sequence prediction. The authors use CSCGs as interpertable language models to which they apply ICL to analyze key-properties of the learning mechanism. The authors go on to show how CSCG can perform similarly to ICL tasks designed for Transformer models and then show similar properties of CSCG to Transformer models (such as over-parameterization improves performance). The insights can help design new models. Strengths: 1. The authors use CSCG an interpretable model to study a learning method (ICL) that is of high importance to the research community. 2. The experiments are thorough, and the authors show interesting capabilities of CSCG that they compare to a Transformer model. Weaknesses: 1. It is a difficult paper to evaluate as the authors evaluate CSCG ability on standard ICL benchmarks such as 4.3 Dax test and the use of CSCG as a proxy to study ICL, e.g. 4.2 "Emergence". Expanding on my point, the authors choose to evaluate CSCG as a direct replacement to LLM (which does not appear to be either their premise or motivation of the work). 2. Related to 1.; the results and figures, lack conclusions. For example, I would expect to know how / what the experiment explains for how ICL works. An example where this is done successfully is Figure 5. `In-context accuracy (with standard errors) per task on the two LIALT test sets: for each task, overparametrization improves performance.`, but is not present in other figures or sections. 3. The jump on conclusions from a Transformer model to a CSCG is quite large. As such it appears the authors are under-delivering on their original premise, while the paper is of great interest without making the reference. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Please see weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the review for distilling the essential concepts we have tried to convey in the paper. Our goal here is indeed to elucidate a general framework for in-context learning behavior, leveraging the interpretability of CSCGs. A mechanistic understanding of ICL in transformers is still a work in progress (with several exciting works that we have referenced); we hope that this paper will spur further work in that direction. We shall emphasize this, and the current limitations of CSCGs as a direct alternative to LLMs, in the discussion section. > I would expect to know how / what the experiment explains for how ICL works. An example where this is done successfully is Figure 5, but is not present in other figures or sections. Thanks for the comment regarding clarity; we will improve the figure captions to make this more explicit. * In the GINC experiment, Section 4.1, we show that ICL is driven by (a) learning ​​a transition graph where the hidden variables are clustered by concepts (see Fig 3A) and (b) successful retrieval of the shared latent concept between the prompt and the model. In the first figure in our one-page PDF supplement to the global response, we illustrate the schema retrieval process, and the role played by increasing prompt length. * In the LIALT experiment, Section 4.2, we show that ICL requires — in addition to (a) and (b) — also (c) rebinding of novel tokens not seen during training to appropriate slots. Both Figs. 3B and 5[left] additionally show that model size drives retrieval and ICL capabilities. The second figure in our one-page supplement to the global response illustrates the schema retrieval and rebinding process, again leveraging the interpretability of CSCGs. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their effort to perform the additional experiments for the figures. I have read their response and I will be keeping my score.
Summary: This paper implements Cloned Structured Causal Graphs (CSCG), a model that was previously used in Neuroscience to explain Hippocampal cognitive maps, on language tasks that require in-context learning (ICL). They show the success of CSCG's across a variety of language benchmarks and come up with a theory on the mechanisms behind ICL and validate this theory using the results from ICL in CSCG's. The properties they claim are necessary for ICL include context-based separation, context-based merging for transitive generalization, and the presence of general abstract schema that can implement certain abstract content-independent operations that can also be rebound as necessary. Targeted experiments are conducted to illustrate each of these properties with a handful of relevant benchmarks, both old and new - GINC (old), LIALT (new), and PreCo (old). Strengths: * I have not seen CSCG's being used for the language domain, so I think that is a novel contribution inofitself. * The properties that explain the mechanisms of ICL are very interesting and, although some I think can only be uniquely revealed through the architecture of CSCG's, seem like architecture-agnostic generalizable principles that add to the interpretability literature on ICL. * Experiments are targeted, well-organized, and thorough. The contribution of the LIALT dataset also can be a valuable test-bed for ICL abilities. Weaknesses: The paper makes some (though I think reasonable) conjectures on how these properties are implemented in transformers. However, there isn't any empirical evidence to verify these claims. I don't think this is a strong weakness, though, because the empirical evidence from CSCG is a valid contribution by itself and this can be the topic of future work. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: What was the process of finding and identifying the schemas shown in Figure 2? Is there a principled way to detect abstract generalizable schemas learned during training? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: I don't think the authors have included explicit sections discussing the limitations of using CSCG's in language modeling or on the broader societal impacts of this work. Would appreciate including this in a rebuttal, possibly with the extra page that is given to authors during the rebuttal phase. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are heartened by your emphasis on the strengths of the CSCG approach and the novelty of its application in this setting. We are similarly excited to leverage the interpretability of CSCGs to elucidate by analogy a general framework for in-context learning behavior. You might also find interesting the additional figures in our one-page PDF supplement to the global response, illustrating the process of schema retrieval and rebinding. > What was the process of finding and identifying the schemas shown in Figure 2? Fig. 2 aims at giving a high-level intuition of CSCG circuits to the reader: we selected simple sequence-to-sequence algorithms and manually designed the schemas. In Figs. 4 and 9, we show that we are able to extract similar schemas from a trained CSCG. > Is there a principled way to detect abstract generalizable schemas learned during training? Each schema corresponds to a cluster of latent states with a specific connectivity pattern. Performing community detection on the learned CSCG graph is one simple approach to discovering schemas. > I don't think the authors have included explicit sections discussing the limitations of using CSCG's in language modeling or on the broader societal impacts of this work. Would appreciate including this in a rebuttal, possibly with the extra page that is given to authors during the rebuttal phase. We appreciate that you consider this of significant social impact to warrant discussion. In addition to elucidating the mechanics of ICL, we hope also that it serves as an exemplar in motivating the pursuit of interpretable models, especially as LLMs proliferate in application. Such interpretability will plausibly help tame biases and enforce guardrails for safety. We shall emphasize this, and the challenges of CSCGs for language modeling, in the discussion section. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I have read it and am happy to keep my rating.
Summary: The paper aims to replicate the in-context learning (ICL) phenomenon in large language models (LLM) with clone-structured casual graphs (CSCG), which is roughly trying to learn hidden states in PODMP such that the transition matrix is invariant new environments, and only the emission matrix needs re-learning. CSCG has the properties of context-separation, transitive generalization, schema-formation, and refining, which can be used to explain ICL behaviors. Results on three benchmarks show CSCG can match some LLM ICL behaviors. Strengths: - ICL is important and emerging, and its understanding is timely. The paper contributes to a novel view into ICL with some interesting constructions and experiments. - The context-dependent latent representation and transitive generalization make sense. - Experiments seem interesting and help replicate some ICL behaviors on toy datasets, e.g. accuracy of ICL depends on overparametrization. Weaknesses: - CSCG is unfamiliar to most readers and there could be some confusions that hinder understanding the paper (see questions). Importantly, **I do not understand how is the transition learned in the first space**, even before any "rebinding". And what is even the training data for each experiment?? - The setup of CSCG is still far from LLM ICL, and there should be acknowledgements of such limitations if the goal is to explain LLM ICL via CSCG. For example, learning and architecture, training task, test task... I confess I do not understand CSCG fully. It might be a super smart and cool idea with a lot of potential, and I'm willing to raise my score these are addressed. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - How is learning (of transition matrix) done? The paper seems to only talk about re-binding for learning the emission matrix... - Why transition T is assumed to be fixed, and what kinds of train-test split do we assume this? (LLM might be hard to assume this?) - Is the latent symbol space for z huge? Is "clone" just defined as #latent_symbols/#observed_symbols? Also, why not several latent tokens between observation tokens? Might make symbol space exponentially more efficient and more like language compositionality? - Figure 3, is accuracy kind of low? does it really match Transformer models? - Is this whole CSCG thing kinda like a latent automation? Do you think LLM is possible to recover such symbolic structures? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Did not see any limitation section in the paper, but there should be some (see weaknesses). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Indeed, it is the combination of context-dependent latent representation and transitive generalization capability that drives the power of CSCGs. Since the thrust of this paper is on in-context learning behavior, page constraints limit us from elaborating on CSCG details. We are happy to add to the appendix a section elaborating on these details, if that might be helpful. Towards the end of this response, we try to clarify the aspects you have asked about. Our emphasis in this paper has been on leveraging the interpretability of the CSCG model for elucidating in-context learning behavior. While we do not directly interpret ICL behavior in LLM models, we reason by analogy, and hope that this paper will spur more work in that direction. A mechanistic understanding of ICL in transformers is still a work in progress (with several exciting works that we have referenced). We hope also that the interpretability of CSCGs will motivate such an emphasis in LLMs (and sequence models more generally) as they proliferate in application. We will emphasize these aspects in the discussion section. > Also, why not several latent tokens between observation tokens? Might make symbol space exponentially more efficient and more like language compositionality? You seem to be suggesting having a CSCG with factorized latent space: this is a relevant modification to the model. We agree that such a structure can allow better compositionality while enabling scalability. We hope to explore such modifications in our future research. > Figure 3, is accuracy kind of low? Does it really match Transformer models? On the GINC dataset, CSCG achieves an in-context accuracy above 90% for context-length of 5 and above 95% for context lengths of 8 or 10. These results are higher than the ones reported for Transformers in [2]-which introduced the GINC dataset–and comparable to the best LSTMs reported by the authors. We acknowledge that further tuning might boost transformers’ performance; we are only conveying that these high accuracies illustrate that CSCGs are good at context-specific retrieval (which is essentially what the GINC dataset is probing). > Is this whole CSCG thing kinda like a latent automation? Do you think LLM is possible to recover such symbolic structures? You are right, one could think of the action-conditioned transition matrix as a stochastic generalization of a finite deterministic automaton in the latent space. Some recent work [4] has demonstrated how LLMs might be able to learn such automata. The advantage of CSCGs is how easily interpretable the learned schemas are, as graphs. Refer the second figure in our supplemental PDF to the global response, for an example. CSCG details: * Train & test data: For the GINC dataset, we use the same training and in-context test set as [2], which is available on GitHub [3]. For the LIALT dataset, we describe the structure of the train and test set (with examples) in fig. 4A, and lines 174-184 and lines 185-193 of the text. One can think of our training sets as analogous to the pretraining datasets used in LLMs, and of our in-context test set as the prompts used to test LLMs for in-context learning. * Learning of the transition matrix: Learning in CSCGs uses the Expectation-Maximization algorithm to maximize the log-likelihood of the CSCG on a sequence of symbols. It is therefore similar to HMMs training, with additional computational benefits due to the clone structure. Note that during CSCG training, the emission matrix (which specifies the latent spaces associated with an observation) is fixed, and only the transition matrix is learned. The original CSCG paper [1] has more details. * Freezing of the transition matrix: The transition matrix T is only learned during the “training” phase, which we describe in line 155 (for the GINC dataset) and in line 194 (for the LIALT dataset). We demonstrate that ICL behavior in CSCGs, which emulate “algorithms” on novel tokens, can be implemented by (a) freezing the learned transition matrix (the schemas, corresponding to algorithms) at test time, and (b) only modifying the emission matrix through the rebinding algorithm. * Latent space: “Clones” correspond to different latent states which emit the same observation. For a given dataset, the latent space that needs to be modeled is at least as big as the number N of distinct contexts necessary to correctly predict the next token. In line 196, we introduce an “overallocation ratio” to parameterize the CSCG capacity proportionally to N. In practice, our latent space (including the overallocation) contains a few thousand states. References: [1] Dileep George et al. “Clone-structured graph representations enable flexible learning and vicarious evaluation of cognitive maps”. In: Nature communications 12.1 (2021), p. 2392. [2] Sang Michael Xie et al. “An explanation of in-context learning as implicit bayesian inference”. In: arXiv preprint arXiv:2111.02080 (2021) [3] https://github.com/p-lambda/incontext-learning/blob/main/data/GINC_trans0.1_start10.0_nsymbols50_nvalues10_nslots10_vic0.9_nhmms10 [4] Bingbin Liu et al. “Transformers learn shortcuts to automata”. In: arXiv preprint arXiv:2210.10749 (2022) --- Rebuttal Comment 1.1: Title: Thanks Comment: I've increased my score from 5 to 6, but I hope authors work on clarity during revision to make the interesting topic more accessible and understandable!
Summary: The authors propose the clone-structured causal graph (CSCG) with rebinding as a model of in-context learning in language models. The authors conduct several experiments showing that CSCGs can learn latent graphs corresponding to meaningful concepts seen in the training data, while also generalizing to instantiations of those concepts containing novel tokens seen at test time. Strengths: The presentation of the CSCG as a model of in-context learning is new to me and very interesting, addressing a hot topic in LLMs that is relevant to a large portion of the community. The experiments show that CSCGs can be learned in a way that leads to meaningful concepts, as well as the fact that overparameterization is useful for CSCGs as with typical LLMs. Finally, the experiments show that CSCGs can perform fast binding of novel tokens to previously-seen structures in a "dax test", which is an important capability the authors argue is not adequately explained by existing frameworks. Weaknesses: Key claims about interpretability are not substantiated. In particular, the abstract and intro clearly emphasize the importance of interpretability in models of language/ICL, but the experiments do not meaningfully explore how CSCGs are more interpretable than existing LLMs. Some experiments are somewhat superficial or small scale, particularly 4.3, where only a few qualitative examples are shared, without any quantitative analysis showing that the results are representative. The largest datasets are orders of magnitude smaller than the datasets used by real LLMs. This shortcoming isn't absolutely critical, but of course the results would be much more convincing if ICL phenomena could be shown at something closer to real-world scale. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: How does the size of the latent graph for e.g. list reversal scale with list size? What is the computational cost of fitting a CSCG with EM? PreCo is many orders of magnitude smaller than typical pre-training corpora. Is CSCG plausible even for very large corpora? Are the results in Figure 6 cherry picked? How often is rebinding + MAP inference successful? Is there a typo in the emission matrix/clone structure definition? The number of columns of the emission matrix is the size of the observation space when it is defined, but it is the size of the latent space when the clone structure condition is defined later in 2.1. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors should add some discussion about the limitations of CSCGs for modeling the behaviors of real-world LLMs, particularly whatever computational challenges may exist in scaling CSCG-like models to billions or trillions of tokens. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are glad that you consider novel and interesting our application of CSCGs towards a framework for understanding ICL. We will elaborate on the challenges of scaling CSCGs to large datasets in the discussion section, and some potential directions for progress. To recap how we leverage CSCG interpretability in the paper: in Fig. 3A, we extract the CSCG transition graph and observe five clusters – corresponding to each of the concepts in the GINC dataset. Similarly, in Fig. 4B, we extract interpretable circuits from a CSCG trained on the LIALT dataset: these circuits explicitly represent sequences of different lengths (relating also to the question of the latent graph size). We also present two additional figures in the one page PDF supplement to the global response (which we will include in the final paper), illustrating: * How increasing prompt length contributes to schema retrieval * The step-by-step functioning of the retrieval and rebinding process, on LIALT prompts Finally, please note that some previous work on CSCGs [1] has also exploited their interpretable structure to match with cognitive maps in the hippocampus. Cumulatively, we hope this buttresses our claims regarding the interpretability of CSCGs. > How does the size of the latent graph for e.g. list reversal scale with list size? The size of the latent graph for the reversal algorithm, as measured by the number of nodes, scales (a) as O(N) when the list size is fixed to N (b) in the worst case, as O(N^2) when the list sizes can vary between 1 and N. We illustrate this scaling (a) conceptually in Figs. 2E and 2F and (b) empirically by extracting the latent graph from a trained CSCG in Fig. 4B: the unrolled view illustrates how lists of different sizes are represented in the latent graph. > What is the computational cost of fitting a CSCG with EM? [...] Is CSCG plausible even for very large corpora? For a CSCG with M clones per symbol trained on a sequence of length N, each EM iteration has a computational cost in O(N M^2). Note that the size of the latent space is H = M*E, where E is the total number of distinct symbols. It is possible to exploit the sparsity structure of the problem (as well as other elements, such as factorizing the transition matrix, parallelizing the computation of the EM steps, etc) to scale the training of CSCGs to large language datasets. > Are the results in Figure 6 cherry picked? How often is rebinding + MAP inference successful? We selected the examples in Figure 6 so that, when filling in the words, the context makes it clear what the missing word is. The same is necessary for humans; it is hard to rebind correctly with an ambiguous sentence such as “I went to the dax this morning”. When there is no ambiguity about the missing word, we found MAP inference + rebinding to be successful. > Is there a typo in the emission matrix/clone structure definition? The number of columns of the emission matrix is the size of the observation space when it is defined, but it is the size of the latent space when the clone structure condition is defined later in 2.1. Thank you for pointing out a typo on line 64: we have inverted the rows and columns of the emission matrix. We will fix it. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response. If the proposed revisions are included in the final paper, I am happy to keep my score.
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their time and thoughtful comments. As the reviews have identified, we have used CSCGs as a sequence model and leveraged their interpretability to deconstruct in-context learning (ICL) behavior into a combination of schema-learning (at training time) and schema-retrieval + rebinding (at prompting time). We hope our work demystifies by analogy the surprising ICL behavior observed in LLMs, showcases avenues for further research on ICL capabilities, and provides impetus for interpretable methods. In our one page supplement, we present two additional figures that illustrate the mechanics of the schema retrieval and rebinding process, and also highlight the interpretability of CSCGs. We shall make use of the additional page for the camera-ready version of the paper to incorporate these. Pdf: /pdf/0ab7e5efa8ed1a6e03fe92144d5b73c7b2bd3aa0.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Time-uniform confidence bands for the CDF under nonstationarity
Accept (poster)
Summary: The authors propose a method for constructing time-uniform and value-uniform bounds on the CDF of the averaged historical distribution of a real valued random process. They propose a design principle AVAST (always valid and sometimes trivial), implying that their bounds always have coverage, but may converge to 0 slowly. The authors build upon the work of [Howard and Ramdas 2022], extending those results to non-stationary processes, and allowing for counterfactual estimation. Strengths: 1. The authors provide a new algorithm for constructing time and value uniform bounds on the CDF of the averaged historical distribution of a real valued random process in the non-stationary case. The theoretical guarantees are cleanly provided, and the proofs appear to be correct. 2. The authors extend these results to importance-weighted random variables. 3. The authors simulate their algorithm on several synthetic examples, showcasing its adaptive performance. Weaknesses: The one point that is not clear to me is the instance-adaptivity of Theorem 3.3. Is this a constant bound $\xi_t$ constant for all $t$, i.e. $\xi_t \le \xi$ for all $t$? If not, how is it that the bound is time-uniform while having no dependence on previous $\xi_t$? It seems as though if $X_1,\dots,X_{100}$ were i.i.d. $U[0,\epsilon]$, and $X_{101}$ is independent $U[0,1]$, then the confidence interval width $U_{101}(\epsilon/2)-L_{101}(\epsilon/2)$ would depend critically on $\epsilon$. Simulations would also be helpful to clarify this point. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: See weaknesses, the instance-adaptivity of Theorem 3.3 is unclear. This work seems polished and well-written aside from this, but this result seems like a critical part of the paper. I am leaving a score of weak reject, but if this point is clarified I would be happy to see this paper accepted. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 4 excellent Contribution: 3 good Limitations: Line 79: leveraging *the* monotonicity line 137: two commas after e.g. Line 427: For a fixed "bet" $\lambda$. It is not explained why $\lambda$ should be treated as a bet. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The one point that is not clear to me is the instance-adaptivity of Theorem 3.3. Thanks for pointing this out, we'll add more commentary, since this is an important and desirable property of the technique which should be highlighted. The width guarantee is dependent upon the smoothness of the averaged historical distribution up to time t, which can vary with t (as the technique adapts to the [unknown] smoothness at each timestep). In particular the distributions can become polynomial (in t) less smooth over time and we still enjoy a 1/sqrt(t) shrinking width up to log(t) factors. It is only when distributions are exponentially less smooth with time that our width becomes constant, which (not coincidentally) is what the impossibility construction of Block et. al. does. > For a fixed "bet" $\lambda$. It is not explained why $\lambda$ should be treated as a bet. Thanks for pointing this out. When we moved material from the main text to the appendix, we lost the definition of $S_t$ (as another reviewer pointed out). Defining that will clarify. Furthermore the word "bet" is a term of art from the confidence sequence literature which is not helpful here, so we'll switch to using the word "parameter". --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for the clarification; I now realize that I had misread notation, and that $\mathbb{P}_t$ is the $\textit{averaged historical distribution}$, as opposed to the distribution of $X_t$ itself. My apologies. This seems to be somewhat confusing notationally, as the CDF is already defined in terms of the averaged historical distribution. In light of this, I am increasing my score to a 6; I think this paper should be accepted, but will not champion it. The contribution of the paper is clear, the problem it tackles is interesting, and the theoretical results it provides are useful, but the writing and clarity could definitely be improved (as shown by the questions of the other reviewers).
Summary: The paper considers the problem of estimating the CDF of a sequence of random variables obtained sequentially according to some distributions. Many classical results are known for this problem when the random variable sequence is obtained in an i.i.d. manner following a certain distribution, along with its confidence bands. On the other hand, some impossibility results are known for the case where the random variables are dependent on each other, and their confidence bands are not sufficiently known. To address this issue, the paper constructs confidence bands for CDFs with running averaged conditional distributions that are uniform in time and value. The extension to importance-weighted random variables is also discussed. Strengths: - The paper shows how to construct confidence bands in the setting where there is dependence in the sequence of random variables obtained, an important problem in practical applications. This result is further extended to the importance-weighted setting, which is frequently encountered in many online decision making problems. - Through a number of experimental results, the paper shows that results that are consistent with the theoretical results are obtained. Weaknesses: - The contents of the Introduction seem to be insufficient, because it is not clear what kind of problem is being solved just by following line 12 to line 27, and then the explanation of contirbutions is given immediately after that. Section 5 (Related Work) seems to be useful for understanding the problem the authors are trying to solve, so it would be desirable to include it earlier. Even after adding it, the explanation of background knowledge still seems insufficient, so it seems desirable to use the remaining pages to provide more explanation in the current Introduction. - Although the comparison with existing results is clearly stated, the reviewer could not tell from the text what the major technical difficulties of the problems are. Many of the proofs and contributions are included in the appendix, and it would be desirable to have an explanation of what technical difficulties were solved to obtain the main results, even if they are only sketches. Minor issues and typos: - The horizontal line labels in Figures 1 to 7 are too small to see. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - As noted in Weaknesses, there are few specific descriptions of technical difficulties, and reviewers expect authors to address them. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The contents of the Introduction seem to be insufficient Great point, the current introduction is abstract and a concrete example will be helpful. We will discuss the scenario of continuous monitoring for software regression detection, which combines the desire to do inference beyond the mean (hence the entire CDF) with a desire to detect changes as rapidly as possible in a nonstationary environment.[[1]](https://arxiv.org/abs/2205.14762) > Many of the proofs and contributions are included in the appendix, and it would be desirable to have an explanation of what technical difficulties were solved to obtain the main results, even if they are only sketches. Thanks for pointing this out. We'll provide short sketches of the proofs within the main text. For example, for the first two theorems: * Theorem 3.1 follows from the guarantees of the confidence sequences at each fixed value, the monotonicity of the CDF, and a union bound. * Theorem 3.3 follows by noting the bias of the discretization is bounded by the smoothness, and then propagating that through a closed-form confidence boundary. > The horizontal line labels in Figures 1 to 7 are too small to see. Thanks for pointing this out. We will increase all font sizes. --- Rebuttal Comment 1.1: Comment: Thank you for your reply. I have read the rebuttal by the authors, and I will keep the current rating.
Summary: This paper proposes a new construction of time-uniform confidence bands for CDF, where the standard toolkit such as Glivenko—Cantelli cannot be applied due to nonstationarity. At a high level, the proposed method combines confidence sequence for a certain subset of values via a union bound. Despite the impossibility result from the adversarial online learning scenario, the proposed construction of confidence bands is shown to have vanishing width for a class of smooth distributions (Theorem 3.3). This result holds for unbounded random variables (Section 3.3) as well as under a known distribution shift case (Section 3.4). The simulations demonstrate that the proposed algorithms can be effective. Strengths: The considered problem of constructing confidence bands for CDFs under nonstationarity is of great practical importance. The proposed technique seems to be new and interesting. The theoretical guarantee in Theorem 3.3 is insightful as it characterizes the adaptivity. Weaknesses: - While I believe that the proposed method is new and novel, the writing is hard to follow, which can be much improved. - The paper is not very self-contained. For example, Table 1 needs to be elaborated further. For example, there is no description on “$w_{\max}$-free” in the main text or in the caption. - The figures can be revised further. Legends and labels are too small and not very visible. And it might be better to put Figures 2-5 after Section 4 as these are never referred before the experimental section. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: Most of my questions are around Algorithm 1, which seems to contain the key idea of the proposed method. The authors should revise the manuscript so that the key ideas become clear. - Is i.i.d.ness is assumed throughout Section 3? It is never explicit except in line 80 before Section 3.4. - In Algorithm 1, aren’t the functions $\epsilon(d)$ and $\Xi_t$ also “inputs” (or “hyperparameters”) to the algorithm? If so, I think it’d better to put it in the algorithm, not in the caption. Further, I think that making Algorithm 1 illustrative by giving a concrete example of $\epsilon$ and $\Xi_t$ may improve readability. Since the paper is extremely notation-heavy, it is not very clear how to parse the algorithm at that abstract level. - In Algorithm 1, most of the readers will be confused with $W_{1:t}$, as it hasn’t been defined by then. I can search over the text and find it in Section 3.4 that it means the importance weight, but it only distracts a reader if it’s put in Algorithm 1. - In the caption of Figure 1, it is stated that “The algorithm searches over all d to optimize the overall bound via a provably correct early termination criterion”. What do you mean by the “provably correct early termination criterion”? In Algorithm 1, where is this corresponded? And where do you provide a provably correctness? As the resulting estimator seems new and interesting, I am inclined to vote for accept, but only provided that the writing is greatly improved in the revision. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: Limitations are not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The paper is not very self-contained. For example, Table 1 needs to be elaborated further. For example, there is no description on &ldquo;$w_{\max}$-free&rdquo; in the main text or in the caption. We will expand section 5 to explicitly define the six properties we identify in the columns of Table 1, including what "unbounded importance weights" refers to. > The figures can be revised further. Legends and labels are too small and not very visible. Thanks for pointing this out. We will increase the sizes of all fonts. > Is i.i.d.ness is assumed throughout Section 3? It is never explicit except in line 80 before Section 3.4. The goal of our paper is to dispense with the i.i.d. assumption, so we do not assume it for any of our results. We will be more explicit about this in the introduction. The goal of line 80 was only to point out that a previous proof method used in the i.i.d. setting appears to break down, which is why we use a different technique. > If so, I think it’d better to put it in the algorithm, not in the caption. Good suggestion, we'll make this change. > In Algorithm 1, most of the readers will be confused with, as it hasn’t been defined by then. We will remove it. > In the caption of Figure 1, it is stated that “The algorithm searches over all d to optimize the overall bound via a provably correct early termination criterion”. What do you mean by the “provably correct early termination criterion”? In Algorithm 1, where is this corresponded? And where do you provide a provably correctness? Thanks for pointing this out. We will include an explicit proof. The proof sketch is as follows: once the termination criterion is met, any subsequent (deeper in the tree) bounds are evaluated on the same statistics but with a worse confidence level, and hence are dominated. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response and read other reviews. Though I believe that the paper has an interesting contribution, I think the manuscript needs a major rewriting which will require another round of review. Hence I will keep the score, and I hope the reviewer to revise the whole manuscript carefully so that the problem setting, assumptions, and the description of the algorithm clearer to make the manuscript well-received to a broader audience.
Summary: The paper presents the time and value uniform bounds on the CDF of the running averaged conditional distribution of a real valued random variable. The new bounds do not require iid setting and always achieve non-asymptotic coverage. The converge speed depends on the smoothness of distribution against the reference distribution, which it is the uniform distribution. Following Howard et al (2021), a confidence sequence of a discrete time random process was employed, and a time uniform coverage property was a key to construct the uniform bounds. Authors established the bounds for unit interval random variables, extended to real random variables then extended to importance weighted random variables. Algorithms for lower/upper bounds and theoretical properties are presented and simulation studies are provided. Strengths: Reliable bounds on the CDF for dependent data setting is valuable as iid setting does not always hold. In order to deal with dependent data, the smoothness of distribution to a reference measure is assumed and the time-value uniform bounds are presented. It is examined numerically for both iid samples and nonstationary data. Weaknesses: This is not my research area and, I found the paper was hard to read and follow. I have an impression that this paper is written assuming that readers have some knowledge and, detailed explanation would be helpful. i) Confidence sequences Lambda and Xi have very important roles but I can’t grab what they are from the Algorithms 1 and 2 and the justification. Although the confidence sequence review is given in Appendix A, it is not straight forward to make connections to Alg 1 and the rest of paper. ii) Consequently, it was difficult to follow proofs without fully understanding the confidence sequences and their theoretical properties. Where are the Eqn 12 from? Could authors derive this from the assumptions and properties of the confidence sequence? iii) Authors did extensive simulation studies with iid samples although the highlight of result is dropping iid assumption. iv) Some abbreviations were not defined. For example, NSM, DKW and DDRM. v) Careful edit seems to be needed. For example, there are Eqn (10)s, S_t is not defined in Appendix A. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please see the weakness section. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Authors addressed the limitation of the new bounds. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > This is not my research area and, I found the paper was hard to read and follow. Thanks for the honest feedback. We are trying to promote confidence sequences within the machine learning community (they are better known within the statistics literature) and it is challenging to properly calibrate the exposition, but we feel the effort is worthwhile. > i) Confidence sequences Lambda and Xi have very important roles but I can’t grab what they are from the Algorithms 1 and 2 and the justification. Although the confidence sequence review is given in Appendix A, it is not straight forward to make connections to Alg 1 and the rest of paper. We will expand Appendix A to better introduce confidence sequences. We will state the definition of a fixed-sample confidence interval and then make the extension to a confidence sequence as a sequence of confidence intervals satisfying a coverage guarantee which holds uniformly over time. We hope this will make the paper more self-contained. > iv) Some abbreviations were not defined. For example, NSM, DKW and DDRM. Thanks for pointing this out. * We'll drop the usage of the acronym NSM entirely and use "construction" instead (NSM is a term of art from the stats community which is not helpful here.) * DKW is defined on line 55. * DDRM is itself the name of a technique; when first mentioned in the text a reference is provided, but the potential first encounter for the reader is in the caption of a figure 5, so we'll put a reference on that usage. > v) Careful edit seems to be needed. For example, there are Eqn (10)s, S_t is not defined in Appendix A. Thanks for pointing that out, when we moved some material to the appendix we lost some definitions, which we will restore. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response to my comments and other reviewer's comments. The paper has potential and will be in a better shape after accommodating comments. However, I think it needs a major revision and I will keep the score. --- Reply to Comment 1.1.1: Title: Actionable ideas? Comment: > However, I think it needs a major revision and I will keep the score. Thanks for your honest feedback. It doesn't sound like you object on a technical level, but rather to the presentation. Do you have some specific suggestions on how we can better describe the concepts to this audience given the page limit?
Rebuttal 1: Rebuttal: We thank for reviewers for helping to improve the paper. To improve the intelligibility of Algorithm 1, we propose adding the attached figure. Further, in the appendix, we will manually describe the execution of Algorithm 1 step-by-step until termination on a simple dataset of five items. Pdf: /pdf/a5844389e7640c3c730c45fc2e402349138053d0.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Im-Promptu: In-Context Composition from Image Prompts
Accept (poster)
Summary: This paper presents a comprehensive study of in-context learning of compositionality in images. It introduces a simple benchmark of compositional language-driven visual transformations and explores under what conditions a model can perform in-context analogies from image pairs. Strengths: Very interesting study, with relevant conclusions that can be easily turned into hypotheses for examining how one can achieve in-context analogical reasoning in a more real-world, less synthetic context. The scope of the study is well delineated, and the hypotheses that are tested are very clear. This is a very important topic beyond the narrow setting of 'can we do visual analogies', because of the widespread believe that analogy making is a great proxy for broader understanding of the visual world. This study further reinforces recent observations from a number of researchers that object or attribute-centric representations are very effective at enabling higher level visual reasoning and deserve more attention (pun intended), and confirms that cross-attention is a strong contender for enabling compositional learning beyond just a single modality. Weaknesses: One could argue that results obtained on synthetic data always come with the question of whether the findings will transfer to more real-world settings. In this case, the diversity of synthetic data sources partially addresses the concern. Nits: - Fig 1: girafe has two 'f' in English. - Suggest shortening refs [17] and [19] if permitted byt the NeurIPS style guide. A whole printed page for a single reference is a bit over the top. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: None. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Limitations of the work are adequately spelled out. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and positive feedback. --- Rebuttal Comment 1.1: Comment: ack on rebuttal
Summary: This paper focuses on the problem of using composable elements of visual modality to perform in-context referring for analogical reasoning like LLMs do. Firstly, the authors constructed three benchmarks to evaluate the generalization capacities of visual in-context learner. Then, the authors unified the formula of in-context analogy learning, and designed a framework called Im-Promptu. Different levels of agents, ranging from simple pixel-space rules to object-centric learners are trained for exploring the granularity of composition in visual modality. Lastly, the paper pointed out the characteristics of different degrees of compositionality via experiments. Strengths: 1. The paper proposes an interesting topic of considering the in-context composition learning from visual prompts. Similar to the composition skills of LLMs, visual generative models’ compositionality is largely under explored. The motivation of this paper is meaningful and insightful. 2. This paper is of great novelty. First, it formulates the notion of analogy-based in-context learner and proposed a novel training framework called Im-Promptu based on that. What’s more, the paper provides a detailed and insightful analysis of different granularities’ influence on the compositionality and whether analogical reasoning enables in-context generalization. 3. The paper contributes three benchmarks for evaluating the generalization properties of visual in-context learners. Weaknesses: 1. There is no obvious weakness I have found in this paper. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Just one question: The proposed three benchmarks are very different from actual scenes, are there any plans to introduce more real-scene images to the benchmark? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors discussed limitations and there is no potential negative societal impact of their work from my perspective. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and positive feedback. We address the various concerns below. > Just one question: The proposed three benchmarks are very different from actual scenes, are there any plans to introduce more real-scene images to the benchmark? We are experimenting with photorealistic graphics engines that can render textures, near-natural objects, and finer lighting to close the sim-to-real gap. --- Rebuttal Comment 1.1: Comment: Thanks authors for their rebuttal. I will keep my score. Excellent work!
Summary: This work proposes a framework for in-context image generation/composition from image prompts. The in-context learner resembles analogy completion and is simply optimized with a reconstruction objective. Several variants in terms of visual representations (pixels/latent vectors), the compositionality of the vectors (monolithic/object-centric), and the scheme of integrating prompts (cross-attention/simple concatenation) are explored to indicate the key ingredients of a successful visual in-context learning model. In addition, a benchmark consisting of three object/attribute-combinatorial datasets is created to evaluate visual in-context learning tasks. Strengths: + The main idea of introducing in-context inference ability (which is text-free) into the object-centric learning framework is interesting. + The proposed Object-Centric Learner (OCL) is technically sound and the experimental results demonstrate its validity for in-context composition from image prompts. + Extensive evaluations regarding different levels of compositionality and different means of context aggregation are investigated. Weaknesses: - The main concern comes from the confusing contribution clarification of this paper. I do not see which specific/major contributions the authors want to emphasize. Text-free in-context generation compared to text-to-image generation models such as DALL-E? Endowing SLATE with the in-context composition capability by inserting a context encoder transformer? (a) If the answer is the first one, I would say it is over-claimed. Although Figure 6 to some extent showcases that DALL-E requires tedious prompt engineering and some external techniques like inpainting to achieve scene composition, the proposed text-free in-context generation method is not more appealing since 1) it can only be applied in the toy scenario (while DALL-E supports open-vocabulary generations with high fidelity), and 2) one needs to have each component in advance (e.g., an image of a rubber purple cube in the desired position), and this is not feasible in practical scenarios. (b) If the answer is the latter, I would say the claimed contribution is not well supported. The proposed OCL demonstrates its ability of in-context composition from image prompts, but SLATE also supports compositional generation using the concept library built from the training datasets. I think some comparisons and discussions with SLATE would help contextualize this paper. - Overall, the paper is not easy to follow because: 1) there are many long sentences with less specific descriptions, and 2) many technical details of the in-context learner are missing and Section 4 is redundant (e.g., Equations 2 and 3 could be moved to the appendix). In addition, the authors should consider adding some preliminaries of slot attention and SLATE, making the paper more self-contained. - The proposed method is only evaluated on toy synthetic datasets. Although previous related works (e.g., SLATE) generally struggle to generalize to real-world data with diverse scenes, I would like to see some experiments on natural images. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: In terms of the experiments, I have two questions: (a) How to implement the multi-shot in-context generation? (b) In Figure 6, the objects in the generated image seem to be placed in the same position as they are in the image prompt. How is this position-preserving functionality realized by OCL? I would like to see some insights into this from the authors. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and constructive feedback. We respond to the questions and concerns below. > The main concern comes from the confusing contribution clarification of this paper. I do not see which specific/major contributions the authors want to emphasize. Text-free in-context generation compared to text-to-image generation models such as DALL-E? Endowing SLATE with the in-context composition capability by inserting a context encoder transformer? The goal of the work is the latter, i.e., enabling in-context composition for visual agents, with the former being a demonstration of how compositional visual understanding could be useful for downstream tasks. > If the answer is the latter, I would say the claimed contribution is not well supported. The proposed OCL demonstrates its ability of in-context composition from image prompts, but SLATE also supports compositional generation using the concept library built from the training datasets. I think some comparisons and discussions with SLATE would help contextualize this paper. LLMs have the uncanny ability to implicitly compose outputs using human-designed abstractions (words, BPE, sentences). On the other hand, methods like SLATE learn their own abstractions; however, composition follows via explicitly constructed concepts by the human user. Such rules can be tedious to specify as they scale exponentially with the number of stored concepts. Our work then combines the best of both approaches by simultaneously learning the compositional abstractions and the composition process. Moreover, the implication of this goes beyond simply inserting a context encoder over SLATE. While SLATE is a state-of-the-art slot learning method, it does not a priori follow that slot abstractions can enable state-of-the-art in-context compositional generation. Our work rigorously compares slots against the other possible composition choices to test this hypothesis. > If the answer is the first one, I would say it is over-claimed. Although Figure 6 to some extent showcases that DALL-E requires tedious prompt engineering and some external techniques like inpainting to achieve scene composition, the proposed text-free in-context generation method is not more appealing since 1) it can only be applied in the toy scenario (while DALL-E supports open-vocabulary generations with high fidelity) We agree with the reviewer and do not claim to supersede DALL-E at generating open-vocabulary generations. The example shows how in-context visual understanding using the model's own concepts could facilitate relational understanding of the scene in comparison to grounding concepts in text. Future image-generation systems could then use text for high-fidelity generations of individual concepts in conjunction with architectures like OCL to facilitate the understanding of the interplay of these concepts. > The proposed method is only evaluated on toy synthetic datasets. Although previous related works (e.g., SLATE) generally struggle to generalize to real-world data with diverse scenes, I would like to see some experiments on natural images. Please refer to the section “Concern surrounding the absence of natural images” in the global response. > How to implement the multi-shot in-context generation? The context encoder collectively attends to the set of all context examples in the multi-shot case. Simple example encodings (implemented as positional embeddings) help the encoder differentiate between examples. We will add this point to the text in addition to releasing our code, where the implementation will be made much more clear. > In Figure 6, the objects in the generated image seem to be placed in the same position as they are in the image prompt. How is this position-preserving functionality realized by OCL? I would like to see some insights into this from the authors. Note that the OCL is trained on object addition and deletion analogies (see Figure 9 in the supplementary). The image prompt then corresponds to an analogy of simply adding the object at a particular position in the scene. We will add a section in the appendix specifying the algorithm used to generate the prompts and the corresponding compositions. > Many technical details of the in-context learner are missing and Section 4 is redundant (e.g., Equations 2 and 3 could be moved to the appendix). In addition, the authors should consider adding some preliminaries of slot attention and SLATE, making the paper more self-contained. We have added detailed background for slot attention and SLATE in the Appendix (see Section D in Appendix). All technical details have been specified in Table 6 of the Appendix. If the reviewer could be more specific about the details they are looking for, we would be happy to provide the necessary information. --- Rebuttal Comment 1.1: Title: more comments Comment: Thanks authors for their responses. The rebuttal partially addresses my concerns, but I lean towards rejection given that (1) the contribution is not well justified; (2) empowering SLATE with the ability of in-context generation via explicit image prompts is interesting but the technical contribution is limited (as pointed by Reviewer 4dpY); (3) the evaluation on toy datasets is not adequate in my view; (4) the writing and presentation need substantial improvements. Thanks. --- Reply to Comment 1.1.1: Title: Reply to response Comment: Thank you for your constructive feedback. We believe that our work provides a crucial bottom-up understanding of compositional generalization from few-shot examples that can further help understand in-context learning in modalities beyond language. We encourage you to parse through the response to reviewer 4dpy for a detailed enumeration and implications of key contributions. In addition, our experiments also show that "visual analogy" methods pointed out by reviewer 4dpy (Ref [1] and [3]) exhibit limited generalization to higher-order composition prompts. Our datasets, although synthetic, consist of benchmarks of varying complexity and semantic diversity. While datasets based on real images are able to measure task-specific metrics, we provide an avenue for systematically testing and measuring the broader compositional generalization limits of future methods.
Summary: This work investigates whether analogical reasoning can enable in-context composition over composable elements of visual stimuli. First, they introduce a suite of three benchmarks to test the generalization properties of a visual in-context learner. They formalize the notion of an analogy-based in-context learner and use it to design a meta-learning framework called Im-Promptu. To this end, they use Im-Promptu to train multiple agents with different levels of compositionality, including vector representations, patch representations, and object slots. Lastly, they demonstrate a use case of Im-Promptu as an intuitive programming interface for image generation. Strengths: In-context learning is a novel learning paradigm recently. While in-context learning is popular with large language models (LLM), it has not been discovered in computer vision fields. This paper provides three benchmarks for in-context learning for computer vision and introduces some algorithms for proposed benchmarks. Since in-context learning is open for computer vision, the attempt is valuable and makes further research possible. Weaknesses: In-context learning is a general learning paradigm for diverse tasks. In the computer vision field, there are a lot of tasks (e.g., classification, object detection, segmentation). However, this paper only considers three tasks: 3D Shapes Image Prompts, BitMoji Faces Image Prompts, CLEVr Objects Image Prompts. Thus, this benchmark may be not sufficiently comprehensive from this perspective. On the other hand, the performance versus the number of in-context examples is one important aspect of in-context learning. This paper doesn't provide any insight from this view. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I have the following questions: 1. Since the number of in-context demonstrations is one important parameter in in-context learning fields, is there any ablation study for the number of demonstrations? 2. The input and output of these three tasks are all images. One question is can the text be the output of the task (e.g., image classification)? 3. In the proposed method, there are some components involved. What's the difference if we use models from scratch instead of pre-trained? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Compared with in-context learning in language models, this paper only considers three proposed computer vision tasks. Thus the setting of in-context learning may be limited. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and positive feedback. We address the various concerns below. > Since the number of in-context demonstrations is one important parameter in in-context learning fields, is there any ablation study for the number of demonstrations? We agree with the reviewer that the number of context examples is a critical factor to in-context learning ability. We have, in fact, run ablations on the number of examples by exploring k-shot variants of the models. The plots in Figure 4 and Figure 5 show the model performance against the number of examples under “k-shot” labels. In addition, observations on the effect of context size have been described in R1.3. We find the performance of the sequential prompter and patch-based agent to significantly improve with additional examples. > The input and output of these three tasks are all images. One question is can the text be the output of the task (e.g., image classification)? We think this is possible by jointly modeling slots and language tokens and doing the context aggregation over the joint space. > In the proposed method, there are some components involved. What's the difference if we use models from scratch instead of pre-trained? As such, we found that symmetry breaking among slots converged extremely slowly when trained from scratch and was sensitive to hyperparameters. We found it helpful to pre-train the encoder and bias the slots to capture object abstractions before learning to compose. --- Rebuttal Comment 1.1: Title: Post-rebuttal Comment: After reading other reviewers' reviews and authors' rebuttals, I decide to keep my positive score. Even though the setting in the paper is not comprehensive, I still think in-context learning is an interesting topic in computer vision and this paper can provide benchmarks and some insights for the community.
Rebuttal 1: Rebuttal: We would like to thank all reviewers for raising a number of great questions, adding detailed comments, and drawing our attention to related works. We will incorporate the feedback in the next iteration of the manuscript. We address some common concerns below and introduce additional results. ### Concern surrounding the absence of natural images Formulating a method that can solve in-context downstream computer vision tasks is a worthwhile goal, but the scope of this work lies in understanding the computational underpinnings of compositional generalization. Uncovering the process by which humans form and modify flexible representations from few-shot examples is key to abstract reasoning and visual understanding [1][2]. While training large models on natural images is an appealing avenue for studying the emergence of large-scale in-context learning, this by itself tells us very little about the acquisition and the generalization limits of composition skills. Therefore, in this work, we construct synthetic benchmarks with composable attributes to systematically find effective inductive biases for compositional generalization. We believe that the key to scaling up our models lies in designing natural image benchmarks where compositional attributes of the visual stimuli are modified. Generating compositional stimuli for real images is a significant engineering effort, and we leave this to future work. ### Additional Inpainting Results We thank Reviewer 4dpy for pointing us to recent work [3] on visual prompting through in-painting. The authors represent an analogy via an image grid and use inpainting to fill in the missing solution. We implemented this methodology to investigate its ability to acquire composition skills. The results are as follows: **Primitive Task Extrapolation**: | | OCL-1 shot MSE | In-Painting MSE | OCL-1 shot FID | In-Painting FID | | --------- | ----------------- | --------------- | ----------------- | --------------- | | Shapes 3D | 4.72 ∓ 0.46 | 142.77 ∓ 1.31 | 34.77 ∓ 0.06 | 31.20 ∓ 1.86 | | BitMoji | 6.91 ∓ 0.54 | 260.94 ∓ 8.27 | 8.96 ∓ 0.05 | 104.84 ∓ 12.23 | | CLEVr | 37.99 ∓ 1.91 | 200.64 ∓ 3.75 | 38.99 ∓ 0.57 | 186.34 ∓ 5.33 | **Composite Task Extrapolation**: | | OCL-1 shot MSE | In-Painting MSE | OCL-1 shot FID | In-Painting FID | | --------------------------- | ------------ | ------------------ | ------------ | ------------------ | | Shapes 3D - Two Composite | 6.22 ∓ 2.76 | 194.6 ∓ 1.56 | 34.18 ∓ 0.02 | 31.82 ∓ 2.45 | | Shapes 3D - Three Composite | 7.96 ∓ 2.93 | 268.6 ∓ 0.92 | 35.12 ∓ 0.05 | 41.84 ∓ 1.51 | | Shapes 3D - Four Composite | 12.62 ∓ 4.41 | 292.84 ∓ 1.12 | 36.82 ∓ 0.06 | 51.34 ∓ 1.97 | | BitMoji - Two Composite | 5.72 ∓ 0.18 | 284.9 ∓ 7.35 | 8.78 | 103.74 ∓ 10.54 | | BitMoji - Three Composite | 5.96 ∓ 0.23 | 311.1 ∓ 6.48 | 8.74 | 108.92 ∓ 9.71 | | CLEVr - Two Composite | 37.12 ∓ 1.16 | 235.66 ∓ 2.77 | 68.45 ∓ 2.32 | 191.72 ∓ 3.85 | | CLEVr - Three Composite | 62.26 ∓ 1.41 | 285.76 ∓ 2.43 | 58.5 ∓ 1.02 | 183.94 ∓ 4.27 | Examples of the generated outputs have been shown in the attached pdf. We observe that in-painting can extend composition rules to in-distribution primitives but produces incoherent outputs when tested on both primitive rule extrapolation (Sec. 6.2) and composite rule extrapolation (Sec. 6.3). It performs the worst out of all the agents. The technical details, results, and code for the In-Painting will be added to the updated manuscript. [1] Brenden Lake et al., “Building machines that learn and think like people,” Behavioral and Brain Sciences, 2016 [2] Brenden Lake et al., “One shot learning of simple visual concepts,” Cognitive Science, 2011 [3] Amir Bar et al., "Visual prompting via image inpainting," Advances in Neural Information Processing Systems 35 (2022): 25005-25017 Pdf: /pdf/70be9b4e4717f83dbc3863b776dbe9f0221decf2.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper investigates whether analogical reasoning can enable in-context compositional generalization over visual entities. The authors construct three benchmarks to test the compositional generalization on visual analogy-making, including 3D shapes, BitMoji Faces, and CLEVR. The authors also present a visual analogy-making algorithm called Im-Promptu. The authors test the proposed method on the constructed benchmarks with various visual representations, including vector representations, patch representations, and object slots. The experiments demonstrate the tradeoffs between extrapolation abilities and the compositionality degree of the visual representations. Strengths: 1. The paper constructs three benchmarks to test the compositional generalization on visual analogy-making, including 3D shapes, BitMoji Faces, and CLEVR. 2. The paper conducts experiments using various visual representations, including vector representations, patch representations, and object slots, and provides insights into the impact of the compositionality degree of the visual representations on compositional generalization. The results demonstrate the effectiveness of the object-centric representation for the compositional visual analogy-making. Weaknesses: 1. The technical novelty of the proposed framework (Im-Promptu) is limited. The formulation in Section 4 is similar to visual analogy-making [1,2]. It is inappropriate to rename it as in-context learning or Im-Promptu learning, if there is no fundamental difference. Section 5 lists several model variants for visual analogy-making by composing existing modules. It is unclear what is the paper's contribution to the methodology. 2. The constructed benchmarks are all synthetic domains with little variance. This is a weakness considering that previous works [2,3] have already studied similar tasks using natural images. 3. The experimental results are inadequate. The authors only evaluated the proposed framework on the self-constructed benchmarks, while previous works [1,2,3] have introduced several benchmarks for visual analogy-making. It is unclear whether the proposed framework generalizes to other realistic benchmarks. 4. The paper lacks reference to important related works [2,3]. [1] Reed, Scott E., et al. "Deep visual analogy-making." Advances in neural information processing systems 28 (2015). [2] Sadeghi, Fereshteh, C. Lawrence Zitnick, and Ali Farhadi. "Visalogy: Answering visual analogy questions." Advances in Neural Information Processing Systems 28 (2015). [3] Bar, Amir, et al. "Visual prompting via image inpainting." Advances in Neural Information Processing Systems 35 (2022): 25005-25017. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. What is the fundamental difference between the proposed Im-Promptu learning in Section 4 and visual analogy-making? 2. What is the main technical contribution of this paper? 3. The method of this paper works better on spatially consistent datasets, so how to reflect the statement "produces a more generalized composition beyond spatial relations" in the paper? 4. Why are there some results with lower MSE and higher FID? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: The authors do not discuss the limitations of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and constructive feedback. We respond to the questions and concerns below. > What is the fundamental difference between the proposed Im-Promptu learning in Section 4 and visual analogy-making? We agree that our work bears similarity to visual analogy-making in the sense of being able to complete an analogy, having observed a transformation over context examples. However, our work is interested in asking the broader question of what enables the in-context compositional generation over composable attributes in the visual domain. Such a compositional learning ability has been well investigated for the language modality. This question can then be disentangled into the interplay of three sub-questions - (1) What composable abstractions can help learn implicit composition? (2) Which objective optimizes learning such rules? (3) How to model the composition from entities? In this context, visual analogy as an auto-encoding reconstruction only answers the second question by formulating a reliable optimization objective via analogy solving for learning transformations. Im-Promptu, on the other hand, supports broader compositional learning by unifying these components under a common mechanistic framework (See Equations 1 and 3). >The technical novelty of the proposed framework (Im-Promptu) is limited. The formulation in Section 4 is similar to visual analogy-making [1,2,3]. It is inappropriate to rename it as in-context learning or Im-Promptu learning if there is no fundamental difference. First, Ref. [2] focuses on solving intelligence test types of analogies where the solver has to classify the correct answer from a bunch of different options, and as such, doesn't support generation. Ref. [1] uses visual analogy as a proxy to learn input-invariant transformation vectors that can traverse continuous manifolds (e.g., geometric transformations, animation frames) via simple rules. The authors demonstrate limited generalization to new inputs but with latent transformations within the training distribution. In fact, our monolithic agent is directly instantiated from their MLP formulation (See Equation 5.3 in Ref. [1]). Our experiments reinforce the findings of [1] and demonstrate the ability of such representations to generalize to single-order transformations (Sec. 6.2) at test time. However, these don't extend to a higher-order compositional generation where multiple composite attributes are modified and therefore don’t support higher-order visual understanding (Sec. 6.3). We would like to thank the reviewers for pointing us toward the in-painting technique used in [3]. We have now implemented the technique with the slight modification of using an MAE -VQVAE ensemble (instead of MAE-VQGAN). The pretraining involves masked auto-encoder (MAE) reconstruction on 2x2 analogy grids described in [3]. See the global response section for exact quantitative and qualitative results. Even though past works have found in-painting as a simple and effective self-supervision task for learning general-purpose representations, we find that it cannot generate compositions beyond the training distribution (see attached pdf for generated outputs). > Section 5 lists several model variants for visual analogy-making by composing existing modules. It is unclear what is the paper's contribution to the methodology. We believe that our work takes a crucial step towards solving visual analogies that can understand and compose higher-order relations than the ones they were trained on, a notion largely unexplored in visual analogy models. To this end, we make several key technical contributions: 1. Benchmarks for the systematic understanding of in-context learning in the visual domain. This is important to understand the underpinnings of in-context learning, the emergence of which is less than completely understood in the language domain. 2. Extensive exploration of model abstractions across the compositionality continuum. While we use existing modules to learn compositional abstractions, it is not a priori straightforward that strong compositional abstractions should also support in-context compositional generation. Ours is the first work that utilizes these modules with a range of context aggregation techniques to understand the implications of various design choices on generalization. 3. From a more practical standpoint, we provide empirical training recipes for OCL that could be utilized to scale it while doing in-context learning over real data. > The constructed benchmarks are all synthetic domains with little variance. This is a weakness considering that previous works [2,3] have already studied similar tasks using natural images. It is unclear whether the proposed framework generalizes to other realistic benchmarks Please see the global response section under the heading “Concern surrounding the absence of natural images”. While other works [3] that use realistic benchmarks show decent performance across task-specific metrics, the degree of generalization that can be obtained via in-painting is, as such, unmeasurable. In fact, our experiments show that the techniques of both Refs. [1] and [3] are not amenable to compositional generalization. > The method of this paper works better on spatially consistent datasets, so how to reflect the statement "produces a more generalized composition beyond spatial relations" in the paper? Here we refer to the aforementioned papers' limited ability to generalize to spatial “positional” relations by separating the position of an object from its content. Our method, in contrast, can be used to simultaneously modify the geometry of the scene with other compositional attributes. We apologize for the confusion. > Why are there some results with lower MSE and higher FID? This is possible if the model generation gets the larger structure of the output right but poorly captures finer aspects like shadows and edge transitions. --- Rebuttal Comment 1.1: Comment: Thank the authors for the response and additional results. While the rebuttal address part of my concerns, I am still not convinced that the so-called Im-Promptu learning is fundamentally different from visual analogy-making. Besides, the proposed method is of limited novelty, and the proposed benchmarks are synthetic and limited. Therefore, I keep my rating towards rejection.
null
null
null
null
null
null
Fairness Aware Counterfactuals for Subgroups
Accept (poster)
Summary: The authors identify various aspects of fairness that require consideration when assessing recourse bias between subgroups. They propose FACTS (Fairness Aware Counterfactuals for Subgroups), in an attempt to lay foundations for a framework that can be used to audit subgroup fairness through counterfactual explanations (CEs). Experimental evaluation is relatively well thought out and thorough. Strengths: 1. The paper is easy to read, with a logical structure and natural flow. In most cases, relevant examples are provided where necessary. 2. I believe the issue being addressed is complex, and hard conclusions regarding the effectiveness of fairness tools should be made sparingly. However, the paper does an excellent job of analysing the problem at a deeper level than that which currently exists in the literature [1]. Explicit separation between the micro and macro viewpoint is well positioned. 3. While there are still gaps to be filled, this paper lays a strong foundation for future research into the use of counterfactuals in assessing recourse bias/unfairness. It is worth noting that [2] conducts similar analyses of the potential pitfalls of bias assessment via subgroup comparison, arriving at a similar conclusion to this paper that recourses should be compared 1 to 1 for more reliable results. [1] Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. ICML 2018. [2] Dan Ley, Saumitra Mishra, Daniele Magazzeni. GLOBE-CE: A Translation-Based Approach for Global Counterfactual Explanations. ICML 2023. Weaknesses: I appreciate the thoroughness of the authors in tackling an issue as complex as this. There are a few important gaps that I believe may have evaded their attention, which I will detail below. 1. The assumption that while cost is not exactly quantifiable, a given action's cost is uniform across the input space, and can thus be compared between instances, does not always hold. One simple example would be changing salary by amount X. Individuals commanding higher initial salaries are likely to find this action easier. I would thus modify the "oblivious to the cost function" claim appropriately (though it is still a more than reasonable starting point for now). 2. Evaluation of fairness appears quite sensitive to the set of actions A chosen (specifically the fixed cost used/described in lines 290-302). Such a budget can often handle poorly the asymmetries between subgroup cost distributions for a given action. That said, multiple cost budgets are evaluated in the experiments, to provide a more complete overview. It's also worth considering the idea of scaling the cost of certain action directions, as in [2], and evaluating fairness accordingly. 3. In a similar vein, frequent itemsets are prone to resolution issues with numerical features. Moving beyond the apriori approach in [3] is wise, though frequent itemsets often do not uncover the minimal cost directions to flip predictions. Additionally, if the motivation behind fp-growth is primarily efficiency, it would be useful to have some analysis of this. 4. Discussion of when and how the fairness metrics can be manipulated (as above) is needed. Additionally, in practical settings, individuals are each in control of diverse actions, often unique to them. The effect of such actions (not included in but confounding with the feature space) is worth pointing out. 5. A large quantity of metrics are introduced. Table 1 caption could do with more clarification so readers don't have to study the text to understand the highlighted values. [3] Kaivalya Rawal and Himabindu Lakkaraju. Beyond individualized recourse: Interpretable and interactive summaries of actionable recourses. NeurIPS 2020. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The bins chosen for numerical features are rather large, and so may not effectively capture the minimum cost recourses between subgroups if the numerical features are influential, which might commonly be the case. This is one of the issues with frequent itemset mining, and something that [3] originally suffered with- do the authors have any proposed solutions? From my experience in the area, this is a fairly serious shortcoming, and something that prevented me from awarding this paper an even higher score. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: Described above. Limitations associated with choosing parameters for fairness metrics and determining appropriate action spaces should be discussed in the revision. The research contributes valuable insights and provides a robust foundation for future work, though I would suggest to keep in mind the inherent complexity of the fairness problem, which is indeed emphasised in the paper. Overconfidence in the reliability of the proposed framework may not encourage an appropriately critical approach to this multifaceted issue i.e. conclusions drawn should be cautiously optimistic and mindful of the above limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1 "The assumption that while cost is not exactly quantifiable, a given action's cost is uniform across the input space, and can thus be compared between instances, does not always hold. One simple example would be changing salary by amount X. Individuals commanding higher initial salaries are likely to find this action easier."** We thank the reviewer for raising this point, which only concerns the cost-oblivious fairness metrics. There, we will make this assumption explicit: the cost of a given action $a$ (e.g., increase salary by amount $\delta$) is the same for all individuals $x \neq x'$, i.e., $cost(a, x) = cost(a, x')$. When this is not the case, we can think of two alternatives that are straightforward to implement in FACTS: - We can keep the subpopulation bins for that attribute small (or adjust them accordingly) so that the assumption approximately holds for each subpopulation. - Adapt the definition of actions so as to satisfy the assumption. E.g., for salary, we consider actions that increase salary by $\gamma$%. **W2 "Evaluation of fairness appears quite sensitive to the set of actions A chosen (specifically the fixed cost used/described in lines 290-302). Such a budget can often handle poorly the asymmetries between subgroup cost distributions for a given action. That said, multiple cost budgets are evaluated in the experiments, to provide a more complete overview. It's also worth considering the idea of scaling the cost of certain action directions, as in [A], and evaluating fairness accordingly"** We emphasize that our goal is to accurately capture the real effectiveness-cost distribution (ecd) within a subpopulation. We achieve this by considering the cost and effectiveness of all valid actions among a set $A$. Augmenting this set with additional actions can help improve the accuracy of the ecd. As suggested, for each action $a \in A$ we may also consider other actions derived from $a$ by scaling $a$'s numerical part; e.g., if $a$ says change income from [40K,50K] to [50K,60K], i.e., add 10K, we can also consider actions that change the income by 5K, 10K, 15K, for example. This can also be achieved by applying a different numerical binning for the actions as we discuss in the author rebuttal. Another approach would be to sample the space of possible actions uniformly at random. **W3 "In a similar vein, frequent itemsets are prone to resolution issues with numerical features. Moving beyond the apriori approach in [3] is wise, though frequent itemsets often do not uncover the minimal cost directions to flip predictions."** We want to emphasize that all model-agnostic methods that explore a finite set of actions $A$ have an inherent limitation to how well they can discover the minimum recourse cost. Nonetheless, as discussed in the previous comment, we can foresee several ways to augment the set $A$ so as to most accurately represent the ecd. **W3 "Additionally, if the motivation behind fp-growth is primarily efficiency, it would be useful to have some analysis of this."** Tables 1-4 in the global response show the runtime of our fp-growth-based generation of subpopulations and actions, and also provides some scalability results. We note however that our main contribution is a conceptual one, i.e., how to formalize notions of recourse unfairness, rather that a technical one, i.e., how to efficiently determine a good set of actions to explore. **W4 "Discussion of when and how the fairness metrics can be manipulated (as above) is needed. Additionally, in practical settings, individuals are each in control of diverse actions, often unique to them. The effect of such actions (not included in but confounding with the feature space) is worth pointing out."** We thank the reviewer for raising these issues; we intend to discuss them in greater detail in a limitation section. The manipulability of the method and the robustness of the conclusions drawn is an important issue. We note that while local/individual counterfactuals have been found to be prone to manipulability, our fairness auditing method is more robust as it considers multiple counterfactuals for the same individual and draws conclusions at a subpopulation level. Clearly, more research in this direction is called for. Regarding the last observation, this boils down to how well the cost function models the world; e.g., it could capture causal structures in the domain, and personalized/unique actions/costs. **Q1 "The bins chosen for numerical features are rather large, and so may not effectively capture the minimum cost recourses between subgroups if the numerical features are influential, which might commonly be the case. This is one of the issues with frequent itemset mining, and something that [30] originally suffered with- do the authors have any proposed solutions? From my experience in the area, this is a fairly serious shortcoming, and something that prevented me from awarding this paper an even higher score."** We note the distinction between binning for subpopulations and binning for actions, which can be done separately as discussed in the global response, and has the potential to define a more fine-grained action set $A$. Additional ways to augment or to define a better set $A$ without relying on frequent itemset mining are also outlined in the response to **W2**, but are somewhat orthogonal to our main contribution. Certainly, the implementation of our auditing framework can benefit from recent advances in the field like [A]. --- Rebuttal Comment 1.1: Title: Response to authors Comment: I would like to thank the authors for their time spent in addressing my concerns. While the response is mostly satisfying, I maintain my score of 7 and vote for acceptance of this work.
Summary: The paper presents a framework called FACTS (Fairness Aware Counterfactuals for Subgroups) for auditing subgroup fairness through counterfactual explanations. The authors aim to formulate different aspects of the difficulty individuals face in achieving recourse, either at the micro level (individuals within subgroups) or at the macro level (subgroups as a whole). They introduce new notions of subgroup fairness that are robust and provide an efficient, model-agnostic, and explainable framework for evaluating subgroup fairness. The authors demonstrate the advantages and wide applicability of their approach through an experimental evaluation using benchmark datasets. Strengths: + The paper addresses an important and timely topic in machine learning, namely fairness in decision-making processes. + The authors propose a novel framework, FACTS, for auditing subgroup fairness, which is model-agnostic and highly parameterizable. + The paper provides a thorough explanation of the different notions of subgroup fairness and their implications. + The experimental evaluation demonstrates the effectiveness and efficiency of the proposed approach on benchmark datasets. + The paper is well written and organized. Weaknesses: There are a few weaknesses: - The paper could benefit from a more detailed description of the methodology used in the experimental evaluation. While the authors have provided an overview of the experimental setup and the data collection process, a more comprehensive explanation of the specific steps taken would greatly enhance the clarity and reproducibility of the study. For example, providing information about the specific criteria used to select the sources or the process of extracting and preprocessing the data would be useful for identifying the potential data generating bias. - In addition, providing more information on the statistical analysis performed on the experimental results would enhance the credibility of the findings. The authors mainly use Comparative Subgroup Counterfactuals for evaluating. However, it remains unclear whether the observations are significant, and how do they generalize to high-dimensional data as well. - The framework may face challenges in cases where the protected attributes are not well-defined or are subject to interpretation. How does FACTS perform when the underlying sensitive attributes are unknown (which is esp. true for many applications)? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weakness section for main questions. In addition to points raised above, there are a few questions remained: - How does the proposed framework handle the issue of defining a cost function for recourse? - Are there any computational or scalability limitations of the FACTS framework? - How does the performance of the FACTS framework compare to other state-of-the-art approaches for auditing subgroup fairness? - The proposed framework relies on the assumption that counterfactual explanations can accurately capture the underlying causes of bias and provide actionable insights. - The effectiveness of the FACTS framework may vary depending on the specific dataset and domain. - The framework may face challenges in cases where the protected attributes are not well-defined or are subject to interpretation. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The main paper does not have a limitation statement, although the authors mentioned a few in the conclusion. There's no broader societal impact statement about the potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1 "more detailed description of the methodology used"** We acknowledge that some details about the experimental methodology description are missing in the main text, and probably not adequately covered in the supplementary material. We intend to remedy this should the paper be accepted. We should note however, that standard preprocessing and train-test practices were applied on all datasets, which are mostly widely used datasets for benchmarking fairness methods. Finally, we note that we have included more detailed information for the additional experiments we have conducted during this rebuttal phase, and which we present in the pdf attached to the author rebuttal. **W2 "more information on the statistical analysis performed on the experimental results" "whether the observations are significant, and how do they generalize to high-dimensional data"** We note that the experimental results are derived by applying the steps outlined in Section 3 and then applying the definitions of Section 2.3 in a straightforward manner. There is no statistical post-processing to include or exclude data points. High-dimensional data may lead to subgroups with few individuals. Please refer to our response to comment *Q3* of *Reviewer fCWq* on significance. **W3 "How does FACTS perform when the underlying sensitive attributes are unknown?"** Similar to fairness literature (see Fliptest, Fairtest, AReS, Globe-CE), the problem formulation that FACTS adopts is that the protected attributes are known in advance. Techniques that discover protected attributes is a very interesting but orthogonal research direction; i.e., such techniques can be applied prior to running FACTS. Nonetheless, we wish to draw attention to two important aspects of FACTS. First, FACTS can investigate whether recourse bias manifests in intersectional subgroups, e.g. black men of certain age groups, despite being absent when comparing all black with all white men, for example. Second, FACTS is not restricted to apply solely on protected attributes. Any attribute (and even a multiclass/multivalued attribute) can be treated as "protected" and given as input to FACTS. FACTS then would assess any disparities in recourse across subpopulations defined by the selected attribute. **Q1 "How does the proposed framework handle the issue of defining a cost function for recourse?"** We consider the definition of cost as an orthogonal, highly application/domain dependent and task, which is out of scope of our work; we refer the reader to the discussion in [30,34] and [A] about the challenges in defining cost functions. Our framework is built so that it can support arbitrarily defined cost functions on top of the available attributes. Upon this, our framework utilizes the produced recourse scores in different ways (or not at all) depending on the selected definition. In particular though, since we recognize the difficulty of defining costs, two of our definitions actually are cost-oblivious, i.e., they rely on measuring effectiveness of actions and not their costs (making the assumption that the same action will have the same cost for all individuals of the examined subgroup). We note that in our experiments, we select a straightforward cost configuration that is used solely to demonstrate the results of our method, as described in "Experimental Setting" of the supplementary material. **Q2 "Are there any computational or scalability limitations?"** We note that as reported in the pdf attached to the global rebuttal, the runtime of FACTS is in the order of a few minutes for the datasets examined. **Q3 "How does the performance of the FACTS framework compare to other state-of-the-art approaches for auditing subgroup fairness?"** In our work, we audit for a specific type of algorithmic fairness, fairness of recourse. All methods of the literature (e.g. Fliptest, Fairtest, "Preventing Fairness Gerrymandering") audit for predictive fairness (like equal opportunity), and thus are not comparable. Note that mean cost of recourse or "burden" [20,31,36] is the only known fairness of recourse metric in the literature. We also note that AReS [30] and follow-up work are global explainability methods, which can be used for auditing for the burden metric. Such methods however are not designed with auditing as their main goal. In particular, AReS only provides a toy example with qualitative results of how their framework can be used for fairness auditing and states that: "It is thus important to be cognizant of the fact that AReS is finally an explainable algorithm (as opposed to being a fairness technique) that is meant to guide decision makers." Experiments with real datasets, presented in Tables 1, 6, 7, 8, 9 and also in Tables 11, 13, 15, 17, 19 demonstrate that burden fails to uncover other forms of recourse bias. **Q4 "the assumption that counterfactual explanations can accurately capture the underlying causes of bias and provide actionable insights."** Indeed we operate on this assumption. We believe that algorithmic recourse (via counterfactual explanations) is an important aspect of the behavior of a model that captures how difficult it is for an individual to have agency over algorithmic decisions that concern them. In this sense, fairness of recourse mandates that individuals are not discriminated against in their capacity to receive desirable outcomes. **Q5 "The effectiveness of the FACTS framework may vary depending on the specific dataset and domain."** We note that we have evaluated our method on four widely-used datasets in the fairness literature and we report interesting/meaningful findings for all of them (in the main text and in the supplementary material). We also show that the various fairness of recourse definitions we have introduced are quite distinct notions, and it is thus up to auditors and domain experts to decide how best to apply the FACTS framework. --- Rebuttal Comment 1.1: Title: Thank you for your rebuttal Comment: I thank the authors for their efforts in providing a thorough rebuttal. The response addressed most of my concerns. For now, I tend to maintain my score and I look forward to the discussion with other reviewers.
Summary: This paper considers the problem of fairness of machine learning based decisions. The setting is as follows: there is a set of features X in R^n where X_n denotes the demographic group, and a classifier h: X to {0,1} and we have access to a dataset D of individuals with h(X)=0. Each individual can take actions in an action space A to potentially change their classification decision to h(a(X))=1: this is recourse, and a is the counterfactual explanation. The paper considers how different actions to achieve recourse can be fair on either a micro or macro level. They first define multiple notions for effectiveness of actions to achieve recourse, that depend on a group, a cost budget or an effectiveness level. They then define and recall multiple definitions to define fairness of recourse (6 such definitions in section 2.3). They propose an algorithm "FACTS" that enables one to check for subgroups (based on features X) where there exist unfairness based on the 6 definitions and allows one to find the interventions to achieve needed effectiveness rates. They evaluate and showcase their algorithm on four different datasets. Strengths: Originality: I think the way this paper thinks about recourse is novel in terms of the trade offs between effectiveness, cost and subgroups. Furthermore their algorithm is novel and intuitive way to find violations of fairness of recourse constraints. Quality: the experimental results seem sound (code is provided) and definitions are well discussed. Clarity: I enjoyed reading this paper, in particular, the introduction. However, there are some minor changes that can improve the exposition. Significance: I think the FACTS tool can serve as a nice way to audit algorithms for fairness of recourse. Moreover, the definitions in section 2.3 and the exposition in the introduction is very insightful for the community in algorithmic recourse. Weaknesses: - there are two limitations of the algorithm: the form of the predicates (subgroups) and the form of the actions. In particular the subgroups are conjunctions of multiple features (does not allow arbitrary subgroups) and second actions are also conjunctions of feature values. This will lead to finding too many subgroups and too many actions, that could have instead been grouped under one subgroup or one action. Second, the algorithm only applies to categorical features and actions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - can you address the point in the weaknesses section? - does the algorithm generalize to multiclass demographic groups and multiclass outcomes? - how can one find recourse for continuous features based on the provided algorithm? - the current algorithm lacks statistical tests to check if the subgroup found and the violations are statistically significant, how can one ensure that the violations found are in fact significant? - what is the runtime of the algorithm and how does it scale when features have very large spaces (i.e. suppose feature X_2 can take 10000 values) comments: - I really liked the introduction, but best to separate related work from the introduction. I would also encourage the authors to make the intro more succinct to avoid repeating the explanations in later sections. - the figures to show the recourse are not easy to read or attractive, I would suggest the authors spend some time improving the design of the CSC (figure 2 and figure 3) - sections 2.2 and 2.3 now read as a list of definitions without much continuity or story between the definitions and constraints. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W-i "There are two limitations of the algorithm: the form of the predicates (subgroups) and the form of the actions. In particular the subgroups are conjunctions of multiple features (does not allow arbitrary subgroups) and second actions are also conjunctions of feature values. This will lead to finding too many subgroups and too many actions, that could have instead been grouped under one subgroup or one action."** We argue that conjunctions of multiple features is the most natural and interpretable way to define and refer to subpopulations. Investigating algorithmic fairness for clearly defined and understood subpopulations is very important. In contrast, arbitrary definitions of subgroups is also susceptible to gerrymandering, where subgroups may be maliciously defined so as to hide unfairness. Generating many actions is actually desirable. When we audit for fairness, we consider all of the generated actions, aggregate their effectiveness and cost, and define the effectiveness-cost distribution on which all our fairness metrics are based. **W-ii "Second, the algorithm only applies to categorical features and actions."** This is not true. There seems to be a misconception about how FACTS handles numerical attributes, which we try to clarify in the author rebuttal. Please refer to the section on *Numerical Attributes and Binning*. **Q1 "Does the algorithm generalize to multiclass demographic groups and multiclass outcomes"** The generalization to multiple protected groups (multiclass demographic groups) is straightforward. For example, we can simply compare, say the burden, of each protected group to all others, e.g., one-vs-all. An example of this approach for the Adult dataset and the multi-values race attribute is presented in Figure 3 in the pdf attached to the author response. Regarding multiclass outcomes, we note that the notion of recourse (and by extension the notion of fairness of recourse) assumes there is a favorable and an unfavorable class, i.e. implies a binary classification setting. This also means that these notions can transfer to a multiclass setting when one of the classes can be considered as the favorable and all others as unfavorable; we note however that we have not seen such a case in the literature of algorithmic recourse. **Q2 "How can one find recourse for continuous features based on the provided algorithm?"** Please refer to the global rebuttal for a detailed explanation. Briefly the idea is to perform binning on the continuous features and define actions based on these bins. **Q3 "The current algorithm lacks statistical tests to check if the subgroup found and the violations are statistically significant, how can one ensure that the violations found are in fact significant?"** Regarding the statistical significance of a subgroup/subpopulation, we note that this task should be application and context specific, as when a subgroup should be considered representative might differ in different applications, and even in the same application when different policies are followed. Our framework enables a simple parameterization that lets the auditor decide the sizes of the subpopulations examined (set to 1% of the dataset size in our experiments). Regarding the statistical significance of fairness violations, we note that for some fairness metrics it is easy to compute the statistical significance. For example, for mean recourse cost, we can simply use the t-test to assess how significant the difference between the means of the two sub-populations is; for fair effectiveness-cost trade-off, we can report the result of the Kolmogorov-Smirnov test, as discussed in the main text. Nonetheless, we argue that the *importance* of any fairness violation should be best judged by an expert auditor with domain knowledge and assisted by tools such as FACTS. Note that FACTS depicts alongside the unfairness metric (e.g., difference of mean recourse cost), the sub-population sizes (the coverage percentages depicted as blue in Figs. 2 and 3 in the main text). **Q4 "What is the runtime of the algorithm and how does it scale when features have very large spaces (i.e. suppose feature X_2 can take 10000 values)"** Tables 1 and 3 in the pdf attached to the author rebuttal report the runtime of our algorithm for all datasets, and also includes results while varying the number of bins in the numerical attributes. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for your rebuttal! Raised my score to accept, I recommend accepting this paper.
Summary: In this paper, the authors explore the fairness of recourse in detail and distinguish between the micro and macro viewpoints. Moreover, they propose an efficient, interpretable, model-agnostic, highly parameterizable framework, called FACTS, to audit for fairness of recourse and provide an interpretable summary of its finding. Strengths: - The authors claim that their work is the first that audits for fairness of recourse at the subpopulation level. - The results of their work are explainable and interpretable, which makes it more practical and easier to take action in order to mitigate the bias for different subgroups. - The authors provide a thorough analysis of various fairness metrics and their advantages and limitations to motivate their work. They also define various subgroup recourse fairness metrics and produce separate subgroup rankings per definition. Weaknesses: - The experiments section could be improved by including more datasets and comparisons with other fairness metrics and definitions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In general, how realistic is it to be oblivious to the cost function and have an equal choice for recourse? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - The experimental section provides limited insight as it only contains one dataset. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W "The experiments section could be improved by including more datasets and comparisons with other fairness metrics and definitions."** We would like to point out that due to lack of space in the main text, we have only included a single dataset Adult with sex as the protected attribute. In the supplementary material, we have additional experiments using Adult (race is protected), SSL, COMPAS, and Ad campaign datasets. Moreover, in the pdf attached to the author rebuttal, we include results on German credit. Since fairness of recourse is a quite recent notion of algorithmic fairness, there exist no other fairness metrics beside the average recourse cost or "burden". In results shown in Table 1, and in additional results in the supplementary material, we showcase that the various fairness metrics we introduce capture distinct interpretations of recourse fairness. **Q "In general, how realistic is it to be oblivious to the cost function and have an equal choice for recourse?"** Equal choice for recourse is one of our proposed definitions of recourse fairness and may find application in some settings. To recall, it says that if you have a set of actions such that each action is considered equal in terms of cost (this can be when actions have equal or comparable cost, or when cost is not an issue or irrelevant or cannot be well quantified) you can be *cost oblivious* and define fairness just in terms of *effectiveness* with respect to this set of actions. A cost-oblivious setting where equal choice for recourse makes sense can be seen in Figure 1a in our main text. Suppose that a company makes available to its employees two training programs. Action $a_1$ refers to a training program to enhance productivity skills (and thus affect the CTE aspect of individuals), while action $a_2$ refers to a similar training program to enhance project acquisition skills (and thus affect the ACV aspect of individuals). It can be argued that the cost of actions $a_1$ and $a_2$ is irrelevant and that we can be cost oblivious for them. Equal choice for recourse would investigate if individuals from race 0 and race 1 can equally benefit from these actions. --- Rebuttal Comment 1.1: Comment: Thanks for the explanations. I appreciate the authors' efforts to clarify the concerns.
Rebuttal 1: Rebuttal: We are thankful to the reviewers for their insightful and constructive comments. In this global response, we would like to address some misunderstandings about the focus of our work and how we handle numerical attributes, and also discuss limitations. ### Global Explainability vs Auditing for Fairness FACTS is a method to audit a model for fairness of recourse in subpopulations. AReS [30, 24] and GLOBE-CE [A] are methods to generate summaries for global counterfactual-based explanations. As such the papers and methods have distinct focus and objectives. Please note that GLOBE-CE appears in ICML 2023 and was first published on May 26th in arXiv, *after* the NeurIPS 2023 submission deadline of May 8th. AReS and GLOBE-CE consider global counterfactual explanations (GCEs); a GCE is an explanation that applies to a group of instances collectively (akin to our macro viewpoint). Their methods differ in the GCE definitions. A GCE in AReS is a translation in the feature space (akin to an action in our terminology). A GCE in GLOBE-CE is a direction in the feature space along which instances can achieve recourse with differing translation magnitudes. Nonetheless, AReS and GLOBE-CE have the same goal: find a small set of GCEs to "best" explain a group of instances (according to some objectives). The motivation is to construct an accurate, easily understandable *summary* of counterfactuals; the main contribution of GLOBE-CE is that it constructs a more accurate summary than AReS. Note also the analogy with local explainability, where the goal is to find the best (typically the nearest) counterfactual for a single instance. In contrast, auditing for fairness is *not* an optimization problem. We do not generate counterfactuals and then select a *few* good among them. In fact, we take the opposite approach: we generate as many counterfactuals as we can afford to, and use *all* of them to approximate the effectiveness-cost distribution. Global explainability methods can be used to audit for recourse fairness, but only as an afterthought. In contrast, our main objective is to formalize the notion of recourse fairness. We are the first to motivate and define different variants going beyond the mean recourse cost or “burden” in literature, and consider micro and macro viewpoints when examining subpopulations. Finally, we note that model auditing is typically an offline process, and the runtime is often not an issue. Nonetheless, the runtime of FACTS is in the order of minutes for all datasets and configurations tested, as we report in the pdf attached (Tables 1 and 3). ### Numerical Attributes and Binning We would like to clarify that FACTS indeed works for datasets/models with numerical/continuous features; in fact it works with any mixture of categorical and numerical features. The core idea is that numerical attributes are binned. These bins are used to define subpopulations, e.g., people in the salary range [40K,50K], and actions, e.g., make salary [50K,60K]. So the action "if salary is [40K,50K], make salary [50K,60K]" means that all individuals within that salary range are mapped to their counterfactuals where their salaries are increased by 10K. Binning is *necessary* when defining subpopulations, as we want to explore the entire feature space and present conclusions in a manner that is easily understandable by humans. E.g. compare the interpretability of "there is unfairness against females when married=no, salary in [40K,50K]" with that of "... when married=no, salary=39K or 42K or 45.5K". Binning is also *necessary* when considering actions over numerical attributes. As explained, binning of granularity 10K for salary means that we consider actions that change salary by ±10K, ±20K, etc. We emphasize that because the number of actions/counterfactuals is infinite, *all* methods that work with model-agnostic counterfactual-based explanations (e.g., FACTS, AReS, GLOBE-CE) have to explore only a small set of actions. Note again the distinction between global explainability and auditing for fairness. In global explainability only a few of the explored actions will be selected, whereas we argue that all of them should be used to reliably audit for fairness. Recall that our goal is to approximate the effectiveness-cost distribution, which means we should consider as many actions as possible. Therefore, on the one hand, binning for actions should rather be fine-grained, while on the other hand, binning for subpopulations should be moderately coarse-grained so as to draw conclusions that concern many affected individuals. With that said, the binning granularity for actions and subpopulations can *differ*. For example consider bins of length 10K for subpopulations and 5K for actions. An action, "if salary is [40K,50K], make salary [55K,60K]" means that individuals with salary in [40K,45K] increase their salary by 15K, and individuals within [45K,50K] increase their salary by 10K. In all our experiments, we have shown results using the same binning for actions and subpopulations. Adapting our algorithm to handle differing binning granularities is straightforward. Note that Tables 3 and 4 in the attached pdf present results as we vary the binning granularity. ### Limitations and Potential Societal Impact We would like to thank the reviewers for raising awareness on several aspects, which we intend to discuss in the suppl. material and include in the guidelines for ethical usage of FACTS: - the definition of costs to actions - transparency about the subpopulations explored - how to interpret fairness metrics and auditing results (incl. importance and statistical significance) - need for a user study on interpretability - compliance with GDPR and similar regulations in auditing using protected/sensitive attributes - discussion and study on robustness and sensitivity to malicious actors. [A] GLOBE-CE: A Translation-Based Approach for Global Counterfactual Explanations. Ley et al. ICML 2023. Pdf: /pdf/f13bff8970e8282a3464afea2d260be6e8141471.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper proposes a framework (termed as FACTS) for analysing the recourse fairness of a machine learning model. The work introduces multiple metrics for quantifying recourse fairness - both at micro level (individuals considered separate) and macro level (individuals considered together). Some proposed recourse fairness metrics are argued to be unaffected by the underlying cost metric. Finally, the work demonstrates the proposed framework for a logistic regression model by training it on several datasets. All except one dataset results are present in the supplementary material. Strengths: 1. Analysing model fairness from the perspective of recourse explanations is an important research problem. There exists much work in analysing the fairness of model predictions, but less on analysing if the generated recourses are fair. 2. The paper is very well-written. The motivation, definitions, and metrics are all clearly explained. The authors cover the current metrics clearly, highlight their weaknesses, and then introduce their proposed metrics. 3. Bibliography is extensive and covers most of the recent papers on the related topics. Weaknesses: 1. Experiments section seems insufficient. Please see below for some examples a) The experiments section and the supplementary material presents outputs of the FACTS framework for different datasets, however, it is unclear how to evaluate the efficacy of the proposed framework. It would have been great if the authors had included experiments using toy datasets to create a biased model and then demonstrated that FACTS can uncover the recourse bias while the existing methods don’t/do it with poor coverage. b) I think the current experiments could have been more detailed for example by covering other model classes (e.g., tree-based, neural networks) to understand if the proposed metrics generalise to non-linear and non-differentiable models. c) FACTS is argued to be scalable, interpretable and highly parametrizable (Line 113). However, there are no experiments to support this. It would be great to understand interpretability through a user study for example. Similarly, analysing how FACTS scales to continuous datasets would be very helpful because as it uses an itemset miming algorithms which generally don't scale well with continuous datasets as feature binning creates a large search space [1]. 2. Line 119 - FACTS can explore systematically the feature space. I wonder if there are any guarantees about this? If the feature dimension is high and the dataset has continuous features then the search space becomes too large to scale well. [1] Dan Ley, Saumitra Mishra, Daniele Magazzeni. Global Counterfactual Explanations: Investigations, Implementations and Improvements. ICLR Workshop 2022. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: 1. It seems to create counterfactual summaries, FACTS mines conjunctions of predicates for all positive samples and then uses them to find counterfactuals. Hwoever, doesn't this approach limit the coverage (effectiveness) because if the positive samples are not diverse (i.e., the feature space is not well covered), we may not find counterfactuals for many individuals, resulting in poor coverage. 2. I think FACTS have many similarities in terms of metrics with [2]. For example, micro and macro vs finding a global direction and scaling it locally. Similarly, ECD vs coverage-cost profiles. I would like to hear authors comment on how their work differs from GLOBE-CE. 3. Some points needs clarification/correction 1. What is "horizontal intervention". lines 342 and line 70 2. I think the term "effectiveness" (Line 193) is same as the term "coverage" used in the context of GCEs? See Ref[3] 3. Further, set of actions A can be considered as diverse CFs. ECD profiles same as in GCEs? Micro vs Macro too in G~LOBE-CE 4. Line 154: I don’t think this claim is correct. AReS[3], GLOBE-CE[2] and Gupta et al. have tried to address this problem before using global (group-level) counterfactuals. 6. Line 161: It would be helpful to understand authors comments on how FACTS compares to other global explanations framework in terms of efficiency. [2] A Translation-Based Approach for Global Counterfactual Explanations. Dan Ley, Saumitra Mishra, Daniele Magazzeni. GLOBE-CE: CoRR abs/2305.17021 (2023) [3] Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses. Kaivalya Rawal and Himabindu Lakkaraju. NeurIPS 2020 Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 4 excellent Contribution: 2 fair Limitations: The authors do not discuss any limitations of their work. However, it is important to note, given that FACTS is proposed as a tool to investigate recourse fairness, detailed experimentation and analysis would be needed before using the tool in the real-world applications. Flag For Ethics Review: ['Ethics review needed: Discrimination / Bias / Fairness Concerns'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1a "experiments using toy datasets to create a biased model and then demonstrated that FACTS can uncover the recourse bias"** Please note that we have provided a toy example in Fig, 1 where we show how burden is not nuanced enough to capture other notions of recourse bias. Most importantly, we have experimented with real datasets with known statistical bias, including Adult, COMPAS and IBM Ad Campaign. Tables 1, 6, 7, 8, 9 demonstrate that burden fails to uncover other forms of recourse bias. A more extensive assessment is presented in Tables 11, 13, 15, 17 and 19. **W1b "covering other model classes (e.g., tree-based, neural networks)"** We note that our method is model-agnostic and does depend on the model class. Nonetheless, we have considered two additional models, XGBoost and a NN, for the Adult dataset. An example CSC is in Figures 1 and 2 in the pdf attached in the author rebuttal. **W1c-i "interpretability through a user study"** This is outside the scope of this work, but we acknowledge the utility of a standardized study to evaluate the interpretability of fairness judgements. We recognize that GLOBE-CE has taken steps towards that direction. **W1c-ii "analysing how FACTS scales to continuous datasets"** We present results reporting the runtime as we vary the number of bins per numerical attribute and the minimum support threshold for frequent itemsets. Refer to Tables 3 and 4 in the pdf. **W2 ""FACTS can explore systematically the feature space." I wonder if there are any guarantees about this. ... the search space becomes too large to scale well."** FACTS is an offline auditing method. Indeed the search space can become large and this is a known limitation of all algorithms that examine subpopulations. We note that a similar approach is used by FairTest for auditing prediction fairness of subpopulations and by AReS to generate a summary of counterfactual explanations. We also note that GLOBE-CE does not examine subpopulations. **Q1-i "It seems to create counterfactual summaries, FACTS mines conjunctions of predicates for all positive samples"** We emphasize that FACTS does not aim to create counterfactual summaries. It uses all counterfactuals discovered to audit for fairness. **Q1-ii "we may not find counterfactuals for many individuals, resulting in poor coverage"** It is indeed possible that we may not find recourse for some individuals. Note however that this holds for any model-agnostic method that examines a finite set of actions and does not explore all, possibly infinite, actions. The more actions we examine the better the approximation of the effectiveness-cost distribution will be. If we observe poor effectiveness (coverage in [A]) in some subpopulations, there are some simple remedies: we may lower the minimum support threshold when mining for actions; we may additionally examine actions generated by a different process, like uniformly sampling the space of possible actions. **Q2 "I think FACTS have many similarities in terms of metrics with [2]. For example, micro and macro vs finding a global direction and scaling it locally. Similarly, ECD vs coverage-cost profiles. I would like to hear authors comment on how their work differs from GLOBE-CE."** We summarize the differences: - FACTS is a method designed to audit for fairness of recourse. GLOBE-CE aims to find the best counterfactual summary, which can be used to audit for fairness at the population level. - FACTS examines subpopulations and may uncover instances of unfairness hidden at the population level. GLOBE-CE can only summarize the recourses of the entire population. - When GLOBE-CE is used to audit for fairness it compares the average cost, aka *burden* in the literature, among the protected subgroups. FACTS is more nuanced, studying in depth the notion of fairness of recourse and making several novel contributions; most notably, it motivates and formalizes a series of fairness metrics, some of which are based on effectiveness and thus oblivious to the cost function, and it defines the micro and macro viewpoints. - When GLOBE-CE is used to audit for fairness it draws conclusions based on the best recourse direction discovered, i.e., it forces all individuals to achieve recourse through actions along one direction. In contrast, FACTS draws conclusions by considering all examined actions. - "Finding a global direction and scaling it locally" is neither the micro (it restricts the actions available to individuals) nor the macro view (it allows for different actions to individuals). - The coverage-cost profile in GLOBE-CE is a *constrained* version of the effectiveness-cost distribution (ECD) in FACTS. Specifically, the coverage-cost profile is an ECD where only actions along a *single* recourse direction are aggregated in the micro viewpoint of FACTS. **Q3 “Some points needs clarification/correction [...]”** 1. An intervention for all individuals. 2. Correct 3. Refer to answer to *Q2* 4. Note that the term subgroup/subpopulation refers to a group of individuals defined by more than one attributes, following the terminology in [17]. Regarding the references: -AReS [30] is a global explainability method. Quoting “It is thus important to be cognizant of the fact that AReS is finally an explainable algorithm (as opposed to being a fairness technique) that is meant to guide decision makers.” Fairness auditing is merely discussed as a potential merit of the method and not systematically treated. -GLOBE-CE [A] is also primarily a global explainability method that can be used for fairness auditing. What is important, is that it does not audit fairness on the subpopulation level. - Gupta et al. [10] similarly considers groups defined by a single protected attribute and not subpopulations. Further, the paper focuses on fairness correction and not auditing. 5. FACTS is not a global explainability method. --- Rebuttal Comment 1.1: Comment: Thank you for providing clarifications and sharing more details/experimental results. I have some comments on some rebuttal comments. Please see them below. Further, I summarise my view of the paper towards the end. ***W1c-i "interpretability through a user study"*** I appreciate authors acknowledgement of the utility of a user study. Please note that, user studies are important to access if CSCs are helpful and interpretable to an end-user (e.g., an auditor). I suggest to avoid making claims on "interpretability" till we quantitiavely evaluate it. As the authors note, GLOBE-CE performs a user study and so does AReS. ***Q2*** - FACTS examines subpopulations and may uncover instances of unfairness hidden at the population level. GLOBE-CE can only summarize the recourses of the entire population. Thanks for the clarification, but I think the above statement for GLOBE-CE is incorrect. Please note that GLOBE-CE aims to find a global direction that translates a given set of data to the desired class. Importantly, the set of data could be entire population or sub-population within it. - When GLOBE-CE is used to audit for fairness it draws conclusions based on the best recourse direction discovered, i.e., it forces all individuals to achieve recourse through actions along one direction. In contrast, FACTS draws conclusions by considering all examined actions. Please note that the above statement is not entirely correct. GLOBE-CE does find a global direction, but providing recourse based on it will result in high cost counterfactuals, hence, the authors propose to scale it locally. This will help to maximise coverage but keeps the cost of recourse low. - The coverage-cost profile in GLOBE-CE is a constrained version of the effectiveness-cost distribution (ECD) in FACTS. Specifically, the coverage-cost profile is an ECD where only actions along a single recourse direction are aggregated in the micro viewpoint of FACTS. Again, I understand, the cost-coverage profile is developed using scaled translation vectors and not only following the global direction. Hence, I believe, ECD and cost-coverage profiles are similar ideas. ***Q3*** 1. “Some points needs clarification/correction [...]” Thanks. I would strongly encourage to reuse the existing terms from literature instead of proposing new terms unless there is a need for it. Another example is set of actions A, which goes by the name **diverse counterfactuals** in literature [1][2]. It would be great if authors clarify in the paper these relations. 4. Note that the term subgroup/subpopulation refers..... I understand. Thanks for the clarification. But, as discussed above, GLOBE-CE works at any level (group, subgroup). It is important to clarify this in the paper. **The authors clarified on multiple questions and shared additional results. I think this further clarifies the contribution of this work. I am happy to increase my score. Further, to avoid ambiguity to a reader, I strongly encourage the authors to clarify in the paper how this work differs from other Global CFs methods in the literature.** [1] https://arxiv.org/pdf/1901.04909.pdf [2] https://arxiv.org/abs/1905.07697
null
null
null
null
null
null
Explore to Generalize in Zero-Shot RL
Accept (poster)
Summary: --- I have raised my score based on the answers and results provided by the authors during the rebuttal. --- This paper proposes an algorithm named ExpGen that can selectively exhibit a maximum entropy exploration behavior at test time by measuring epistemic uncertainty through ensemble of policies. In order to obtain a policy trained to maximize the entropy, the entropy is formulated as an intrinsic reward, where the sample based approximation to state distribution entropy is obtained using a trajectory of states are used as neighbors. The experimental results show that ExpGen is able to achieve highest score in two of the five ProcGen environments used in the paper, namely, Maze and Heist. Strengths: The paper proposes novel framework that combines the idea of switching strategies based on the epistemic uncertainty and leveraging the maximum entropy policy to bridge the gap between the performances during train and test time. Weaknesses: Some parts of the logic are unclear and there are few typos and mistakes in the paper. The experimental draws some what countering argument against the motivation of using the general framework instead of ones based on inductive biases, e.g., IDAAC. This naturally leads to combination of IDAAC and ExpGen, but is mentioned in the paper, but not experimented. Another weakness would be the memory and computational inefficiency, resulting from needing to train and inferencing ensemble of policies. Finally, the empirical studies does not draw a clear picture of why and how ExpGen works, i.e., most results are observed performances of the comprehensive algorithm, lacking a detailed ablation of each components of the algorithm. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Why is $k$=1 a good choice (neighbor size)? From the formulation, it seems having just one sample would have a large bias, and thus more samples would lead to better performance. - The variable $k$ is used twice to represent the majority volume in ensemble. The values seem to largely vary from task to task. Was there a pattern or intuition behind selecting a good candidate for a given task? - Using difference in RGB images as the difference in states does not quite make sense. Since states within the same trajectory are treated as nearest neighbors, I do not see a clear connection between states that are close in time and close in image space. Shouldn't two have a correlation in order for the entropy estimation to make sense? For example, it would make more sense if the observations are mapped to an embedding space that clusters temporally near states and the embeddings are used to calculate the L2 norm between states. - Considering the computational burden of training ensemble of policies, calculating L2 distance between two images of size 64x64x3 does not seem like a much of a burden. Was there a large time gap when full original image size was used? How does the performance differ when the original image is used versus different down sampling strategies? - is $i$ randomly selected? - Regarding the experiment done in Figure 5, expanding the observed pattern, it seems that lower $\gamma$ and lower $\beta$ may enable PPO to achieve a score similar to ExpGen. Is there an ablation of such? - While the algorithm is designed upon the idea of preference of generalizing over different tasks (as described by $\mathcal{L}_{emp}$) compared to inductive biases, the experimental results does not seem to reflect this argument as the performance increase is seen only in a small subset of evaluation tasks. - As mentioned in 6.1, is there a result of using IDAAC as the base model for ExpGen? Minor remarks. - In Equation 5, it is not specified how and where $x_i$ is sampled from. - Figure 6, legend labels GPU, which should be GRU - Reference section is missing in the paper (but included in supplementary) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Although limitations are included as part of the discussion, some clear limitation could be directly stated, which should help making a clear distinction of ExpGen to other algorithms. For example, comparing memory and computational efficiency against using designs like PPO with extrinsic and intrinsic reward instead of ExpGen with ensemble. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We include an evaluation of ExpGen+IDAAC (described in the general comment and accompanied by Rebuttal Figure 1), showcasing state-of-the-art performance on all ProcGen environments. Regarding memory and computational inefficiencies: This is a valid point, which we address in the general comment (accompanied by Rebuttal Figure 2). Ablation of ExpGen components: Thank you for this feedback, we will add ablation experiments to the final version. We also wish to refer you to the appendix (section D) for the ablation study. Regarding the choice of the value of k in KNN, we conducted hyperparameter tuning for various values - based on which k=1 yielded the best performance. We will include the evaluation in the final version of this paper. The majority volume of the ensemble k: We treat k as a hyperparameter that we tune. Please see Table 10 in the appendix (section D) for an example of tuning k for the maze environment based on an evaluation for various values of k. In this work we implemented KNN from the RGB image as a way to discern novel states. This was achieved without learned embeddings, but we agree with you that encoding temporally-near states in embedding space has the potential to further improve entropy-maximization performance. With regards to down-sampling the observed state (full image), our implementation of KNN uses down-sampling primarily in order to smooth out noisy pixels in the observation, as well as extract a more concise state representation that is less sensitive to minor changes. It also helped reduce the training time of the entropy-maximizing policy (MaxEnt) by 50\%. Regarding an evaluation of lower values of alpha and beta: Figure 5 illustrates that even the best values are still significantly worse compared to ExpGen. That said, we will include an evaluation of lower values of alpha and beta in the final version. Thank you for your insight on combining ExpGen+IDAAC. We include this experiment in the rebuttal: described in the general comment and accompanied with Rebuttal Figure 1. The ExpGen+IDAAC algorithm achieves state-of-the-art results in high-difficulty environments and yields on-par performance in the remaining environments. --- Rebuttal Comment 1.1: Title: Thanks for the answers Comment: Thanks for the authors' answers. The comment have cleared some of my concerns. I do still have just few more additional question. - The authors mentioned k=1 (neighbor size) being the optimal found through hyperparameter search. I still do not quite understand why larger sample size for sample-based approximation (beyond 1) gives worse performance. Was the performance similar or significantly worse for larger values of k? Do authors have an insight to why this is the case? - As pointed out by other reviewers as well, the use of L0 norm achieves high intrinsic rewards to visually distinguished pixels. Looking at the environments that ExpGen has achieved high scores, namely Maze and Heist, those environment are ones where the L0 norm between two arbitrary states in a trajectory is almost always similar unless a rewarding event occurs (e.g., only opening a door or eating a cheese to make them disappear creates distinguished visual difference) whereas in other environments, the pixels are more dynamically changing, even when not aligned with the objective of the environment. This leads me to believe that experiments show only one side of the proposed method - the relationship between the choice of the metric between states and some specific properties of the environment is more closely tied to the performance increase than the core idea of the proposed algorithm. Then, the claim ties back to the need of using more general measure of novelty (e.g., expected temporal distance from more larger k-NN neighborhood) to show that the proposed method generally works across different tasks and that its capability is not tied to some specific properties of the environment. Thanks to the authors again for the time. --- Reply to Comment 1.1.1: Comment: Thank you for your comments and your time. **The choice of neighbor size $k$ in k-NN:** We added additional experiments to the general comment (accompanied by Rebuttal Tables 4 and 5) that evaluate the MaxEnt score for the Maze and Heist environments for different neighbor sizes $k$. The results show that a small $k$ achieves the best performance (with similar results for $k=1,2,3$), and performance starts to decrease significantly for $k>3$. Our insight is that allowing for large sample size in Maze and Heist produces additional intrinsic rewards for exploring already-visited states, which hurts exploration performance. In the paper, we set the same hyperparameter $k=1$ for all environments (to avoid over-tuning to each specific environment), but we recognize that tuning this hyperparameter for each game separately would improve results. We will add the environment-specific tuning of all games to the appendix of the final version. **The use of L0 norm vs L2 in our experiments:** Thank you for pointing out this valuable insight - we added to the general comment an evaluation of ExpGen using MaxEnt exploration with L2 norm instead of L0 (Rebuttal Tables 2 and 3) and we will add this evaluation to the final version. The additional results for Maze and Heist using L2 distance show that it is not the L0 metric that leads to the improved performance. Moreover, in preliminary investigations on Maze, we also tried a variant that calculates MaxEnt reward using the true state (in maze it is easy to obtain), which yielded similar results. All in all, we believe these results confirm our intuition that the MaxEnt policy (or a reasonable approximation of it using L0/L2) is more difficult to memorize (overfit) than a reward-seeking policy in the ProcGen Benchmark. We believe that the above addresses all the remaining concerns you have raised. Please let us know if there are additional concerns.
Summary: The paper is based upon the key insight, that maxEnt exploratory policies exhibit a much smaller generalization gap than usual reward seeking policies. Previous work introduced the framework of epistemic POMDPs, where a random action is chosen until the uncertainty of the policy is low enough again (quantified by policy ensemble members that agree or disagree). The paper improves upon the previous work by using the maxEnt exploratory policy instead of the random policy when policy ensemble members disagree. This allows to substantially improve generalization performance on ProcGen tasks previous methods fail on (Maze and Heist). Strengths: - Very well written introduction and related work. - Well executed idea to improve weaknesses of previous state-of-the-art algorithm for specific generalization problems of the ProcGen suite. - Interesting and well executed empirical investigation. Weaknesses: **Major Weaknesses** - The usage of L0 norm instead of L2 norm in Eq. (8) is not justified, as to the best of our knowledge, the (log of the) k-NN distance does not approximate the entropy under this norm, but only under the euclidean (L2) norm. Therefore, maximising Eq. (8) might not yield a maxEnt policy. - Evaluation protocol of main experiments is not clear (see questions). **Minor Weaknesses** - In line 72 / 73, it is stated that the exploration policy is GUARANTEED to generalize. The paper does not contain a formal proof, but rather an empirical investigation, therefore this claim should be down-toned to reflect this correctly. **Remarks** - Please check the citation style again, often \cite is used instead of \citep. - Use \eqref to reference equations. - Check notation (e.g. the true state as part of the history in line 128) Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How do you select the k-NN? Is it also done using the L0 norm? - What is your hypothesis why the L0 works better? - How is it justified to use the L0 norm instead of the L2 norm for the k-NN entropy estimator? I.e. the particle based derivation will not work out. - The main point of the paper is, that maxEnt exploration policies transfer better in zero-shot generalization. This should be true irrespective of the final performance. The investigation of this property (Fig. 3 and 6) therefore seems limited, as those are the environments the algorithm actually improves most on. I would raise my score, if the same evaluation could be provided for all environments investigated in the main experiments to get a better view of the empirical significance of this finding. - How are scores of the main experiments in table 1 and 2 computed? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors openly discuss limitations of their approach for other environments of the ProcGen suite, which are better suited for different algorithms that incorporate invariances into the trained policy. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: L2/L0: We used L0 with an intuition that pixel changes are what matters for games like ProcGen, where the agents are localized. This is indeed an approximation, but one that we found to work well. We started an experiment with L2, but did not yet get the results. We remark that L2 is also an approximation (yet indeed a more justified one). We will clarify this point in the text. Regarding lines 72/73: we agree and will tone down this statement We are thankful for your inclination to raise your score. We are working on evaluating the generalization ability of MaxEnt exploration policy on all environments, and expect to finish the experiments during the discussion phase. Regarding the score in Tables 1 and 2: They present the average cumulative reward achieved on the train and test environments. Note that for the train environments, only 200 seeds (levels) are used, and for test environments, a new seed is sampled at every episode (from a pool of all possible seeds). --- Rebuttal Comment 1.1: Comment: Thank you for your additional explanations. However, some questions still remain for me: Could you address the unanswered question about how the k-NN is selected? This should also be stated explicitly in the paper. L2/L0: My intuition about the L0 norm - especially in games like maze where the only thing that changes in the observation during a trajectory is the position of the agent - is that it provides a way of counting unique states. This is rather dissimilar to the L2 norm, which quantifies how much states differ, roughly speaking. In this sense I agree that it can be more meaningful in the shown experiments, but require a lot of knowledge about the problem. Therefore, I would urge to make it more explicit that this is an empirical design choice to improve on very specific environments and would not necessarily hold for a different set of environments, e.g. by adding a paragraph heading to line 191, stating something like "Practical Implementation" or similar. Thank you for the additional details on the score in Tables 1 and 2, however I still don't fully grasp how they are obtained and would ask for a full detailed description, which should also be put into the appendix for future readers. My most pressing questions are, how many test seeds / environemnts are used to calculate the average return. Am I right that the reported STD then is over the average test returns for multiple training runs? How are the train results obtained, averaged over the 200 training seeds? Are the reported results for test and train using the final policy after 25M environment steps or any intermediate ones that might be better? I very much look forward to the additional results on the generalization ability of MaxEnt exploration policies, thank you for your effort! --- Reply to Comment 1.1.1: Comment: The selection of $k$ of the k-NN: The neighbor size k of k-NN was chosen using hyperparameter tuning. The value of k=1 was assigned to all environments because it produced the most balanced performance across all tasks. Having a single value of k prevents over-tuning of unique k's for each task. We recognize that a more granular per-environment choice of neighbor size can potentially increase performance on a per-task basis and we'll add this evaluation for the Maze and Heist environments in the upcoming comment (with all the remaining environments evaluated for varying k added to the appendix). The practicality of L0: We agree with your insight that the L0 norm is more meaningful in the ProcGen benchmark as it simplifies novel-state detection through counting novel states. We will state this more explicitly in the final version under "Practical Implementation details" and will include experiments using L2 norm for comparison in our upcoming comment (the experiments are still running). Clarification on Tables 1 and 2: * Number of Test seeds/environments used for calculating average return: We maintain a buffer of 16K steps obtained by randomly sampling a different seed at the start of each episode from the full distribution of all possible seeds. This means that whenever a game (seed) concludes, e.g. agent death or task completion, the next random seed is drawn. We then compute the average return from the returns of all completed episodes in the buffer. Note that this scheme for computing the average return at test-time is used by the other baselines as well. * Reported STD: Your understanding is correct. The STD is computed over the average test returns (as described above) for 10 training runs (10 different random network-weight initializations). * Train results: We use a buffer of 16K steps and sample at random from the 200 environment seeds of the train set and report the average return of all completed episodes. * The reported results of the final policy: The reported results for test and train are obtained using the final policy after 25M environment steps. The generalization ability of MaxEnt exploration policies: We report the generalization ability of MaxEnt exploration in Rebuttal Table 1 (posted in the general comment above). The table indicates that MaxEnt exploration policies achieve a smaller generalization gap compared to PPO across all ProcGen environments apart from Ninja.
Summary: This paper studies zero-shot generalization in RL. They first make an interesting observation: that intrinsic novelty-based rewards corresponding to maximum entropy exploration exhibit a smaller generalization gap than extrinsic environment rewards on ProcGen games. This suggests that MaxEnt rewards are in some sense richer and harder to memorize, and the agent is less prone to overfitting to them and more likely to generalize its exploratory behavior to new levels. The MaxEnt reward here is implemented using a kNN based method, similar to ProtoRL of Yarats et al. Based on this observation, the paper proposes a new algorithm (ExpGen), which trains a MaxEnt exploratory policy (which, as noted above, should generalize its exploratory behavior well), as well as an ensemble of K exploitation policies using the usual extrinsic reward. At test time, if the ensemble agrees on an action (indicating good generalization), then this action is executed. If not (indicating poor generalization), the exploratory policy (which is assumed to generalize well) is executed instead for a certain number of steps, and the process is repeated. This algorithm is evaluated on the ProcGen benchmark, where it is compared to a number of other published methods (PPO, PLR, UCB-DrAC, PPG, IDAAC, LEEP). On two games (Maze and Heist), it significantly outperforms the other methods. However, it significantly underperforms in several others. Overall, this paper has several things going for it: the insight that MaxEnt reward has more favorable generalization properties is definitely interesting, and I think the ExpGen algorithm (or some variant of it) has potential. However, the experiments do not (yet) convincingly make a case for this algorithm: while it shows advantages in some games, it significantly underperforms in others and its aggregate performance does not seem favorable. This may be fixable — as the authors note, the invariances induced by other algorithms are orthogonal to the contributions of ExpGen, so I suspect the benefits could be combined. I would suggest combining the architectural modifications and auxiliary losses from IDAAC with ExpGen - if this indeed combines the best of both algorithms, the resulting method would be a lot more convincing. This is discussed in the paper, but not done, and in my opinion needs to be tested. Second, there are a number of presentation and/or methodological issues which also need to be addressed. Please see my comments in the Weaknesses section. I think that if all these issues can be addressed, then this would make a strong submission. I think the substance of this paper is good, but it probably requires another revision cycle to be polished enough for publication. **Post rebuttal update: the authors have addressed my two main concerns. They have shown that ExpGen can be combined with IDAAC to get robust and SOTA results across all ProcGen games, and they have shown that the baselines do not improve when given a larger sample budget. Based on this, and assuming the authors will also improve the presentation as promised, I think this is now a strong submission and have raised my score to a 7 (Accept).** Strengths: - The paper addresses an important problem - zero-shot generalization in RL is relatively understudied compared to the singleton MDP setting, but important for many realistic settings - As mentioned above, the insights are original, and the algorithm (also original) follows nicely from them Weaknesses: - As mentioned above, the experiments in their current form are not convincing enough - There is potentially an important methodological issue which needs clearing up, namely ExpGen appears to use many more samples than the other algorithms it is compared to due to the use of several policies. - The presentation could be improved Technical Quality: 3 good Clarity: 2 fair Questions for Authors: My suggestions for improving the paper are as follows: Major: - Try to improve the performance of ExpGen by combining it with algorithmic elements from IDAAC or other methods. As mentioned in the paper, these improvements are orthogonal so hopefully it should be possible — however, it’s not sufficient to hypothesize that the two can be combined, this needs to be validated experimentally. - It may be that ProcGen isn’t the best experimental testbed to showcase the strengths of the proposed algorithm (indeed, several of the games have dense rewards, and exploration bonuses can sometimes actually hurt performance in such settings). An alternative could be MiniHack (https://arxiv.org/abs/2109.13202) — these environments are also procedurally generated, and additionally for the most part have sparse rewards. You can easily replicate the current setup where there is a limited number of seeds at training, and they are also very fast to run. - Another potential testbed is the Habitat embodied AI environment. That has a limited number of training levels and overfitting/generalization is a severe issue. In particular, Object Navigation is currently not solvable via RL and current SOTA requires large-scale IL using expensive human demonstrations (https://arxiv.org/abs/2204.03514) (in particular, because they perform exploration which the agent also needs to do at test time). I think this could be a good fit for the proposed method, and if you could get it to work there that would be very convincing. - Tables 1 and 2 are hard to read due to the many environments and methods. I would suggest displaying aggregate metrics using the RLiable library (https://github.com/google-research/rliable). This will also determine if differences are statistically significant. - I’m concerned the comparison to baselines might not be fair. It is reported that each method trains for 25M steps. However, ExpGen trains not 1 but (K+1) policies. Are each of these trained for 25M steps, or are they trained for 25M / (K+1) steps? To fairly compare to a policy trained for 25M steps, it should be the latter. Please clarify. If each policy was trained for 25M steps, then the baselines should be trained for 25M * (K+1) steps. Minor: - The legend in Figure 3 is quite small and hard to find, which can make the reader confused as to what they’re looking at. I would suggest making it bigger and placing it below the subplots, instead of inside the first one. - The notation in Section 4.1 is confusing: specifically, the Latex rendering of “k-NN” make it look like “k minus NN”. I would suggest using “x^\mathrm{kNN}” or something else that doesn’t use the minus sign. - Figure 5: it is not clear from the caption which task this is for. Please add this to the caption. - The references are currently missing from the main paper (although they show up in the version in the supplement), please fix. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Methodological issue - sample size: Thank you for raising this point. We address it in the general comment (accompanied with Rebuttal Figure 2). Combining ExpGen+IDAAC: This is a valuable insight, one that is expressed by the other reviewers as well. In the experiments described in the general comment (accompanied by Rebuttal Figure 1) we address this point and show state-of-the-art performance in all ProcGen games. Minihack/Habitat: Thank you for this suggestion. We will assess how to extend our evaluation to these domains. That said, we believe our results on ProcGen, including the new results in the rebuttal, already establish that ExpGen makes important progress in zero-shot generalization. Regarding Tables 1 and 2, we'll clarify them and also include aggregate metrics in the revised version. --- Rebuttal Comment 1.1: Title: Thanks, raised my score Comment: Thanks for the response and for running the additional experiments. These have addressed both of my main concerns, therefore I have raised my score to 7 and recommend acceptance.
Summary: This work studies generalization on unseen similar tasks, in a zero-shot manner, and discusses how invariance based approach to overfitting might not work all the time. The algorithm proposed, called ExpGen, has one part that explores the space, while a ensemble of agents are trained to do the reward optimization, and it claims to achieve sota results on ProcGen. Strengths: 1. The paper is well written, and it introduces the problem statement and the challenges faced by the current methods well. 2. There is a good discussion about the related work as well, which helps place the proposed method in a proper context. 3. The set of experiments and tables are clear and the authors have given proper pointers to architecture choices and hyperparameters. 4. The limitations section is also addressed very well. The work talks about the game of Dodgeball where all the methods suffer and some possible insights on things that could be looked into for this. 5. The idea looks promising and can help in adding a useful contribution to the community. Weaknesses: 1. Adding training progress plots can show how the evaluation scores evolved. 2. Figure 4 is very difficult to read. It can also benefit from the more useful captions. 3. The proposed method does not do well in all the games. Other than the invariance, is there anything else that might be a reason for this? And is there a study on improving on these games? 4. For some baselines, the scores from the respective papers seem to have been used. It is possible the methods missed a proper hyperparameter tuning. Clarifying this in the work and doing a proper hyperparameter search and tuning for all methods equally will be helpful. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - Do all methods, including baselines, go through a proper hyperparameter tuning and search phase conducted by the authors? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors do discuss the limitations in their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Training progress: ExpGen combines already trained exploration driven policy with an ensemble of (trained) reward policies at test-time. Therefore the evaluation targets test environments and does not produce figures of the training progress. We will clarify this point, as well as clarify Figure 4 and its caption in the final version. ExpGen performance: In the paper, ExpGen shows a notable benefit in some environments but not in all. We suspected it is due to the limitations of PPO which forms the ensemble of ExpGen. This is validated in our experiment described in the general comment (please see Fig. 1) where we evaluate ExpGen with an IDAAC ensemble, leading to high performance in all environments. Hyperparameters for baselines: this is a good point. For LEEP, we performed our own hyperparameter search, following the advice of the paper's first author (we directly corresponded with him). For some domains, however, we could not recover the performance reported in the LEEP paper, and for those domains, we used LEEP's reported numbers, giving an advantage to LEEP. For IDAAC, we used the published code, and corresponded with the first author to verify that we are using the best hyperparameters (she conducted extensive tuning). Again, for some domains we could not reproduce her results, and in those cases used IDAAC's reported scores, giving IDAAC an advantage. --- Rebuttal Comment 1.1: Comment: Dear reviewer, we believe our response and new results should have addressed all the concerns you raised. If you still have concerns, we would appreciate a chance to address them before the discussion period ends. --- Rebuttal 2: Comment: Thanks to the authors for spending time and efforts on their rebuttal. After getting some more context, I could better understand on why and how some of the choices were made. It is also helpful to understand that the authors tried to reach out to the authors of the respective papers whose results they have reported. However, I am a bit hesitant in increasing the score as without seeing the fully revised final version, it is hard to evaluate the quality of the revised clarifications and notes, especially those related to the figures. I would thus keep my score same as before. The authors can greatly benefit from a proper full revision of their work and make this a valuable contribution. --- Rebuttal Comment 2.1: Comment: We wish to emphasize the following: * The **ExpGen+IDAAC** variant establishes a new state-of-the-art on ProcGen (added on August 10 in the rebuttal's single-page PDF [link](https://openreview.net/attachment?id=suDDDKyW2F&name=pdf)) as it surpasses the previous SOTA in several challenging games and is on-par with SOTA on the rest. In addition, we addressed the reviewer's list of weaknesses in the rebuttal: * Weakness#1: Lack of training figures - Since ExpGen ensembles already trained networks, it does not produce training plots but is rather evaluated at test-time. The training plot of PPO is detailed in the main paper (Fig. 5) and of IDAAC in the rebuttal's single-page [PDF](https://openreview.net/attachment?id=suDDDKyW2F&name=pdf). * Weakness#2: Figure 4 caption - The figure depicts the MaxEnt exploration policy in action: an agent explores the Maze environment by applying its learned optimal policy of "wall-following strategy" ([wall follower](https://en.wikipedia.org/wiki/Maze-solving_algorithm#Wall_follower)). * Weakness#3: ExpGen (PPO ensemble) does not do well in all games - We produced an ExpGen+IDAAC variant that excels in all ProcGen games. You can see the figure and its description and clarification in the rebuttal's single-page [PDF](https://openreview.net/attachment?id=suDDDKyW2F&name=pdf). * Weakness#4. Hyperparameters - We precisely detailed the procedure of obtaining the hyperparameters for ExpGen and the baseline methods in the rebuttal: This included working together with the authors of the other leading algorithms to obtain their best-performing setup. We also provide comprehensive hyperparameter ablation studies for ExpGen itself, detailed in rebuttal tables 2-3 (choice of L0 vs L2 metric) and rebuttal tables 4-5 (choice of $k$ neighbor size of k-NN). All in all, the reviewer can see all of the figures that were requested, alongside their description, clarifications, and notes, which will be added in the final version. Kindly note that this year the NeurIPS rebuttal instructions do not allow to modify the paper PDF, but only upload 1 PDF page. We chose to use this page to address the main concerns with factual answers detailing additional experiments. As the reviewer notes in their own review - our paper is well written. The same writing standard will be used for revising our paper based on the discussion phase.
Rebuttal 1: Rebuttal: Thank you for your valuable insights and suggestions. This paper is the first to incorporate exploration driven behavior at test-time towards generalization in RL, and in doing so, achieves state-of-the-art performance in environments that are widely regarded as challenging by all the leading algorithms (e.g. Maze, Heist, Jumper). We are confident that the research community would benefit from this work, due to the significance of zero-shot generalization for RL, and would attract further research into improving our understanding of the role of exploration towards generalization. We first address all reviewers and describe new results based on the reviewers' suggestions. In our submission, ExpGen forms an ensemble of reward driven PPO agents. On its own, this approach surpasses the state-of-the-art in several challenging ProcGen environments (e.g. Maze, Heist, Jumper). Per the reviewers' recommendation, we combine ExpGen+IDAAC to evaluate variant in which IDAAC reward policies comprise the ensemble (rather than PPO) since IDAAC is the SOTA in all remaining ProcGen environments (e.g., Plunder, Miner, BigFish, etc.) - please see rebuttal Fig. 1. ExpGen+IDAAC outperforms IDAAC alone in several games (still very significant on Heist, Maze, and Jumper) and performs on-par across all others. **We believe these are strong results**. Moreover, this demonstrates that the applicability of ExpGen is not limited to PPO, and that ExpGen can be applied in the future to other, more powerful reward seeking models to infuse them with exploration driven behavior at test time. Sample complexity: Several reviewers raised the concern that ExpGen benefits from training on more environment steps (because of exploration + ensemble policies). Let us explain why this is not the case: An agent can fail at test time either due to bad generalization performance (overfitting, e.g., due to a small number of training domains), or due to insufficient training steps of the policy (underfitting). In this work we are interested in the former, and design our experiments such that no method underfits. The Rebuttal Figure 2 shows IDAAC training for 100M steps, which demonstrates that the best test performance is obtained at around 25M steps, and training for longer does not help (and can even degrade performance). Thus, while it is true that our method requires more samples, **our baselines are not in any disadvantage**. Adding constraints on sample complexity to ExpGen is interesting, but is out of scope for this study, which focuses on unlimited samples, but a very limited set of 200 training levels. Pdf: /pdf/7529183669979f571dad2e3f936b91608aee72cb.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Representational Strengths and Limitations of Transformers
Accept (poster)
Summary: This paper investigates the representation power of attention layers in transformer networks and compares them with other neural network architectures. The authors establish both positive and negative results on the benefits and limitations of attention layers, focusing on intrinsic complexity parameters such as width, depth, and embedding dimension. The positive results include the demonstration of a sparse averaging task where transformers scale logarithmically in the input size, compared to polynomial scaling in recurrent networks and feedforward networks. They also show the necessity and role of a large embedding dimension in transformers. On the negative side, they present a triple detection task where attention layers have linear complexity in the input size. However, they also provide variants of the task that can be efficiently solved by attention layers. The paper's contributions include the formalization of computational limits using communication complexity, the establishment of the representational capabilities and limitations of self-attention units, and the demonstration of the impossibility of computing Match3 using standard multi-headed attention layers in an efficient manner. Strengths: - This paper brings some original contributions to the understanding of attention layers in transformer networks. It provides a mathematical analysis of the representation power of attention layers and compares them to other neural network architectures. The paper introduces some interesting tasks, such as sparse averaging, pair matching, and triple matching, to evaluate the capabilities and limitations of attention layers. This approach introduces fresh perspectives on the benefits and deficiencies of attention layers and offers valuable insights into their role in deep learning. - The paper demonstrates a high level of quality in terms of its analysis and proofs. It presents rigorous mathematical formulations and provides formal definitions for the tasks and architectures under study. The proposed theorems and conjectures are well-supported and backed by detailed proofs, showcasing the authors' expertise in the subject matter. The use of communication complexity techniques adds depth and reliability to the analysis. The paper also includes supplementary proofs and explanations in the appendices, further enhancing the quality and comprehensiveness of the work. - The paper addresses an important gap in the literature by providing a mathematical analysis of the benefits and limitations of attention layers in transformers. The findings have implications for the design and optimization of deep learning models, especially in natural language processing and other sequential tasks. The identification of tasks that highlight the strengths and weaknesses of attention layers can guide future research in developing more efficient and effective architectures. The paper also raises intriguing conjectures, such as the impossibility of efficiently computing Match3, which can inspire further investigations and spark valuable discussions in the research community. Weaknesses: - While the paper provides rigorous mathematical analysis and proofs, it lacks empirical evaluation of the proposed tasks and architectures. Including experiments using real-world datasets could provide practical validation of the theoretical findings and further strengthen the paper's conclusions. Empirical evaluation could also provide insights into the computational efficiency and generalization performance of attention layers compared to other architectures. - Providing concrete examples of how the theoretical findings can be applied to practical deep learning problems, such as natural language processing or computer vision tasks, would enhance the paper's relevance and impact. - While the paper is generally well-written, the mathematical formulations and proofs can be complex and challenging to follow for readers without a strong background in the subject area. Simplifying and clarifying the presentation of the mathematical concepts and providing more intuitive explanations or examples could improve the accessibility of the paper for a wider audience. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see above comments. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > While the paper provides rigorous mathematical analysis and proofs, it lacks empirical evaluation of the proposed tasks and architectures. Including experiments using real-world datasets could provide practical validation of the theoretical findings and further strengthen the paper's conclusions. Empirical evaluation could also provide insights into the computational efficiency and generalization performance of attention layers compared to other architectures. Empirical evaluations on real-world datasets cannot establish fundamental limits or asymptotic separations between different Transformer architectures. While we agree that that computational and statistical properties of transformers can be rigorously studied with empirical methods, these are not the focus of this work, which focuses on approximation capabilities and fundamental limitations. We do include brief experiments in Appendix D, which, while not comprehensive, suggest that transformers have a favorable inductive bias for learning qSA from randomly drawn samples, in contrast to other standard neural architectures like MLPs and LSTMs, which both overfit the training dataset with disastrous generalization. We chose not to emphasize these experiments due to our theoretical focus and the space limitations, but the extra space allocated to the camera-ready version would allow us to present this in the main body. > Providing concrete examples of how the theoretical findings can be applied to practical deep learning problems, such as natural language processing or computer vision tasks, would enhance the paper's relevance and impact. The theory of transformers is not yet at this stage; we are still laying the foundations. (This is similar to early theoretical work on MLPs from Kolmogorov and Arnold from 1957, which characterized a broad class of multivariate functions as a superposition of continuous univariate functions and forms a foundation for future work on the universal approximation of 2-layer MLPs. Recent generalization work about MLPs with more practical implications follows several years of approximation-theoretic work on foundational issues.) > While the paper is generally well-written, the mathematical formulations and proofs can be complex and challenging to follow for readers without a strong background in the subject area. Simplifying and clarifying the presentation of the mathematical concepts and providing more intuitive explanations or examples could improve the accessibility of the paper for a wider audience. We are happy to address clarity issues in the presentation. Can you give any specific pointers?
Summary: The paper presents the representational strength and some limitations of the transformer architecture. 1. Strength: separation between a unit of self-attention and a one-hidden layer neural or a recurrent neural network. The authors present a task where the complexity of the latter networks scale with N (number of tokens) and the self-attention does not. 2. Limitation: the task Match2 can be computed with a self-attention unit that scales with the input dimension d, yet, a modification of the task, Match3, cannot be computed with a single transformer layer. Match3 can be computed with a standard and modified transformer model. The first makes assumptions over the input the second model modifies the self-attention module. Strengths: The paper is very well written. I could easily understand and follow all the definitions and the Theorems presented in the paper's main text. Weaknesses: The weakness of this paper lies in its relevance to the understanding of transformer networks. The paper presents problems qSA, Match2, and Match3, whose relevance to understanding neural networks in practice is unclear. This is even stated in the paper: "Future work by linguists, theoretical computer scientists, and empirical NLP practitioners could assess how foundational our primitives are and study whether there are any practical triple-wise problems that transformer models fail to solve." I think that when studying theoretical aspects of neural networks, it is the responsibility of the authors to motivate their theoretical results and why their assumptions or framework are relevant. Otherwise, such theoretical content belongs in a mathematical venue. Minor comment: When reading Section 1.1 for the first time, I could not follow the authors' intentions/message. Only after carefully reading the rest of the text and definitions could I understand it. As written now, I think it does not convey information properly for someone just interested in having a quick general idea of the results. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I would be happy to change my mind about the paper since the paper is written technically very well. Better than most theoretical papers I encounter! Can the authors at least give some intuitive explanations why they think the tasks presented in the paper are relevant to NLP or any other ML task? Maybe a small experimental study on a small transformer network and show that the functions qSA, Match2, and Match3 somewhat resemble the functions the transformer networks actually compute? More so, through the years, I haven't encountered a single theory paper about the representational power of neural networks that actually sheds any light on understanding neural networks better in the practical sense of explaining their performance (other than the original universal approximation paper). For example, one can study the VC dimension of different architectures, but this does not bring us any step closer to understanding neural networks in practice. In its current form, I think the paper is suited to a pure theoretical venue such as a math journal or a theoretical CS conference. To make it more suitable for an ML venue, a reasonable amount of effort is required to motivate theoretical setup and questions. In that case, it's likely that the scope of the paper would not fit a conference paper but in a journal. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: The authors adequately addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Minor comment: When reading Section 1.1 for the first time, I could not follow the authors' intentions/message. Only after carefully reading the rest of the text and definitions could I understand it. As written now, I think it does not convey information properly for someone just interested in having a quick general idea of the results. We appreciate this comment, and upon revisiting the section, we recommend a few paragraphs the beginning of the section to clarify our contributions before introducing specific details and notation about tasks. Please see our response to reviewer iCyX for our planned revision of Section 1.1 in the camera-ready version. > Can the authors at least give some intuitive explanations why they think the tasks presented in the paper are relevant to NLP or any other ML task? Maybe a small experimental study on a small transformer network and show that the functions qSA, Match2, and Match3 somewhat resemble the functions the transformer networks actually compute? Our intuition was primarily shaped by reading papers that analyzed the self-attention matrices on NLP tasks, such as the following: Clark K, Khandelwal U, Levy O, Manning C. What Does BERT Look At? An Analysis of BERT's Attention. In ACL 2019. Rogers A, Kovaleva O, Rumshisky A. A Primer in BERTology: What We Know About How BERT Works. In ACL 2020. Chen N, Sun Q, Zhu R, Li X, Lu X, Gao M. CAT-probing: A Metric-based Approach to Interpret How Pre-trained Models for Programming Language Attend Code Structure. In EMNLP 2022. The qSA task was motivated by the fact that most self-attention matrices appear to be either wide-scale averages over a large fraction of tokens (where most inputs average over the same collection of other inputs and the identity of the token has little bearing on which elements it is associated with), or sparse matrices where different inputs map to different averages of outputs. These often correspond to intuitive linguistic relationships, such as those linking antecedents and coreferences. We were interested in the limitations of this sparse collection of individual linkages, and we crystallized sparse averaging as a way to make this problem concrete. As noted by Rogers et al, grammatical structures and syntax are directly encoded in self-attention matrices, which can in turn be decomposed into sentence diagrams or trees over multiple layers. We see Match2 as a fundamental unit of that phenomenon, since trees are easily decomposed into combinations of pairwise matches. We also see a more direct motivation for Match2 in co-reference resolution. On the other hand, we see Match3 as an operation that does appear in standard NLP tasks, and that when it does, it's analogous to Match3Assist or Match3Local. Rather, we considered Match3 to represent a family of sequential learning problems that we expect to be much more difficult to solve than real-world language problems. > I think that when studying theoretical aspects of neural networks, it is the responsibility of the authors to motivate their theoretical results and why their assumptions or framework are relevant. Otherwise, such theoretical content belongs in a mathematical venue. We agree with the general standard---applicable to all papers and not just "theoretical papers"---of providing motivation and justifying assumptions and frameworks, and our paper does meet this standard. The motivation to study transformers is given throughout the introduction of the paper, along with specific justification for our analysis framework and scaling regimes. Understanding fundamental capabilities and limitations of transformers is of interest to researchers studying these models, and the types of separation results we proved have a long history of being presented at machine learning venues, including NeurIPS, over the past few decades. We don't agree that "theoretical papers" are subject to a different standard for NeurIPS compared to other types of papers, as the [NeurIPS Call for Papers](https://neurips.cc/Conferences/2023/CallForPapers) explicitly welcomes learning theory papers without stipulating any such qualifications. > More so, through the years, I haven't encountered a single theory paper about the representational power of neural networks that actually sheds any light on understanding neural networks better in the practical sense of explaining their performance (other than the original universal approximation paper). For example, one can study the VC dimension of different architectures, but this does not bring us any step closer to understanding neural networks in practice. While it is true that theory has failed to provide an end-to-end story for deep learning (e.g., why gradient descent applied to transformers can lead to good question-answering abilities), it has provided many useful *suggestions* and *mental models* for reasoning. For example, capacity theory (such as generalization bounds) motivate the use of regularization and weight decay, which are still used in modern architectures. The classical theory of universal approximation, while lacking effective bounds and guidance for architecture selection, do sanity check the enormous representational power of deep networks. In our case, while our theorems are purely representational, in appendix D we verify empirically that qSA is a problem efficiently learned empirically by transformers, but not by other architectures (as evidence by their poor *test* error); tying this back, while perhaps the reviewer feels our work does not have explicit empirical suggestions, it motivates a problem which does capture some empirical benefits of transformers over their predecessors. --- Rebuttal Comment 1.1: Comment: Thank you for your response! **"The qSA task was motivated by the fact that most self-attention matrices appear to be either wide-scale averages over a large fraction of tokens (where most inputs average over the same collection of other inputs and the identity of the token has little bearing on which elements it is associated with)"** Can you refer me to the specific paper and the location where such a statement is made/supported? **"As noted by Rogers et al, grammatical structures and syntax are directly encoded in self-attention matrices, which can in turn be decomposed into sentence diagrams or trees over multiple layers. We see Match2 as a fundamental unit of that phenomenon, since trees are easily decomposed into combinations of pairwise matches."** I'm unfamiliar with sentence diagrams or trees over multiple layers (but I'm happy to learn). Can you help me and refer me to the exact location of Rogers et al. where your claim is supported (since it's a survey of over 150 studies)? **We agree with the general standard---applicable to all papers and not just "theoretical papers"---of providing motivation and justifying assumptions and frameworks, and our paper does meet this standard. The motivation to study transformers is given throughout the introduction of the paper, along with specific justification for our analysis framework and scaling regimes. Understanding fundamental capabilities and limitations of transformers is of interest to researchers studying these models, and the types of separation results we proved have a long history of being presented at machine learning venues, including NeurIPS, over the past few decades.** I totally agree that you mention the above in your text! I only had a problem with the lack of support for the tasks you considered. I did not see references for Clark K, Khandelwal U, Levy O, Manning C. What Does BERT Look At? An Analysis of BERT's Attention. In ACL 2019. Chen N, Sun Q, Zhu R, Li X, Lu X, Gao M. CAT-probing: A Metric-based Approach to Interpret How Pre-trained Models for Programming Language Attend Code Structure. In EMNLP 2022. These works provide some motivation for the tasks, as you replied in your answer. Did I miss it in the main text? or some other form of explanation? **We don't agree that "theoretical papers" are subject to a different standard for NeurIPS compared to other types of papers, as the NeurIPS Call for Papers explicitly welcomes learning theory papers without stipulating any such qualifications.** I agree with you about the standard for NeurIPS. It was under the assumption that the task at hand was unmotivated and unrelated to practical transformers. **in appendix D we verify empirically that qSA is a problem efficiently learned empirically by transformers, but not by other architectures (as evidence by their poor test error); tying this back, while perhaps the reviewer feels our work does not have explicit empirical suggestions, it motivates a problem which does capture some empirical benefits of transformers over their predecessors.** After a quick glance, and I might be wrong, it seems the task you empirically examined is very artificial. Might it be the case that with actual sentences embedded with word2vec, for example, a fully connected network (or other non-transformer) can learn qSA? If not, I think this would establish your case much better because right now, it might be that fully connected networks fail because they are presented with an unnatural and unrealistic distribution. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful review and comments. **Regarding your query about self-attention matrix sparsity patterns:** The appearance of self-attention matrices in Figure 3 of Rogers et al and Figure 1 of Likhosherstov et al suggest that the outputs of softmax attention units resemble either sparse matrices with complicated patterns or low-rank non-sparse matrices. Additionally, the sparsity assumption made in Likhosherstov et al aligns with this observation. Likhosherstov V, Choromanski K, Weller A. On the Expressive Power of Self-Attention Matrices **On the encoding of grammatical structures in self-attention matrices by Rogers et al:** We direct your attention to Section 4.3 of Rogers et al. In particular, the work by Hewitt and Manning using structural probes has identified an iterative encoding of syntactic trees in embeddings of intermediate layers of ELMo and BERT models. Hewitt J, Manning C. A Structural Probe for Finding Syntax in Word Representation. **About the references Clark et al (2019) and Chen et al (2022):** We regret that these papers were not cited in our submission, since we opted to present the work with a primary focus on the theoretical results. While indirectly, these papers provided inspiration for the tasks we chose to construct, and we think that the inclusion of these citations when introducing the tasks and sharing open problems will improve readability. The initial inclusion or exclusion of certain references was to maintain clarity and focus, but we acknowledge that broadening our citation scope might enhance our paper's context and readability. We will take this into account in our revisions. **Regarding the qSA task:** The purpose of the qSA task is to crystallize the intrinsic capabilities and limitations of transformers, and we acknowledge that from the standpoint of NLP research, the task is artificial. The empirical results we presented aimed to demonstrate that transformer architectures are particularly suited for this task, as previously requested by the reviewer; the intention of these experiments is not to establish a relationship to linguistic tasks. While we appreciate your point about the potential capabilities of other architectures under different embeddings, our focus was to elucidate the unique strengths of transformers by providing a simple and concrete task that differentiates the representational abilities of different architectures. It's unclear to us why a word2vec embedding would help a fully connected network learn qSA (when the qSA task is not actual sentences) or what the experiment would reveal about the limitations of different architectures. Thank you for your feedback, and we hope this addresses your concerns.
Summary: This paper focuses on the representational capabilities of attention layers in transformer models and showcases both the strengths and limitations. On one hand, the paper proves that attention layers excel at the presented sparse averaging task compared to RNNs and FNNs. On the negative side, they have complexity scaling linearly in the input size for the triple detection task. The paper theoretically showcases the strengths and limitations of the expressivity of attention layers, especially the role of embedding dimension of the attention layer. Strengths: To my knowledge, the presented theoretical results are novel in the study of transformers and their representational capabilities. However, this work is out of my expertise and I cannot go into depth about the significance of the work. I do appreciate the authors in providing some empirical evidence about the theoretical findings of the work. Weaknesses: As said previously, this is not within my area of expertise. I am however interested in how relevant/connected are the proposed tasks (e.g. sparse averaging and triple detection) to empirical studies such as language or visual data modeling. I think the work would be improved if more insights could be provided as to how the theoretical results can be connected to real-world practices. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: My questions and suggestions are listed above. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: This is a theory paper and the authors provided open directions for future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > As said previously, this is not within my area of expertise. I am however interested in how relevant/connected are the proposed tasks (e.g. sparse averaging and triple detection) to empirical studies such as language or visual data modeling. I think the work would be improved if more insights could be provided as to how the theoretical results can be connected to real-world practices. We agree that it would be very interesting to connect the tasks to real-world problems and practices. However, it is also independently interesting to establish fundamental capabilities and limitations of transformers on mathematically precise settings. See the response to reviewer SrVL for an explanation of our intuition for selecting these tasks. In short, the sparse averaging task was inspired by analyses of the sparsity patterns of attention matrix softmax outputs. We study Match2 because we see a connection between Match2 is a useful primitive for language tasks like coreference resolution and because tree-like grammars can be reconstructed by applying Match2 multiple times; in contrast, Match3 is an analogous primitive that transformers apparently fail to represent without contextual clue (e.g. Match3Assist or Match3Local). Standard literature on coreferences may be found [at this page hosted by the stanford NLP group](https://nlp.stanford.edu/projects/coref.shtml); we will revise our work with this discussion and appropriate citations.
Summary: This paper mainly investigates the inductive biases of attention-based models. They propose three computational tasks that show the limitations of Transformers, namely, sparse averaging, pair matching, and triples-matching. Specifically, they analyze the representational power of embedding dimensions and show that the sparse averaging task scales more favorably in the input size as opposed to the other two tasks. Various proofs are provided to support the investigations and conclusions. Strengths: * Interesting and relevant topic on interpreting the Transformer computations and learning progress. * Good connections with relevant prior works in setting the backgrounds for inspiring the tasks and their practical implications proposed Weaknesses: * Initially the notation is a bit confusing, especially in section 1.1 that details the contributions. Variables $y$ and $z$ should be more clearly defined and explained when mentioning results of the theoretical analysis. * The intro can maybe be reworked to slowly work in the details and notation of the method instead of initially presenting the mechanisms that the different tasks enable Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: * It seems somewhat obvious that the representational capacity of performant Transformeres is related to the embedding space and operations for comparison performed on thus. Have the authors experimented with different contexts aside from the embedding space? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Initially the notation is a bit confusing, especially in section 1.1 that details the contributions. Variables and should be more clearly defined and explained when mentioning results of the theoretical analysis. Upon a closer reading, we see that we use a large amount of condensed English and notation, which hinders a first reading. We will gladly refine and expand this section, using some of the extra space granted with the extra camera ready page. Would the reviewer like to point out any specific frustrations in Section 1.1? More broadly, we think that adding a few paragraphs the beginning of the section to clarify the paper's contributions before introducing the task formulations would help remedy the issues identified by this reviewer and SrVL. 1. Brief description of the transformer architecture - Simple mathematical formulation of a self-attention unit (without MLPs to keep it simple) - Overview of how transformers are assembled by composing these units in parallel and in series - Introduction of all relevant architectural resources and their corresponding variables: number of heads, depth, embedding dimension, bit-precision 2. Overview of our main findings (without getting into details of the problem) - Goal of the paper is to identify tasks that cleanly separate the abilities of different neural architectures with a focus on transformers: what self-attention can do that other models cannot, how embedding dimension modulates approximation power, fundamental limitations of self-attention can do - The methodology of the paper is to introduce such tasks and prove upper and lower bounds the resources necessary for neural architectures to solve those tasks > The intro can maybe be reworked to slowly work in the details and notation of the method instead of initially presenting the mechanisms that the different tasks enable > It seems somewhat obvious that the representational capacity of performant Transformeres is related to the embedding space and operations for comparison performed on thus. Have the authors experimented with different contexts aside from the embedding space? We agree that it is intuitive that results such as ours should be true, but they have not been proved for transformers before our work. Our investigation also reveals fundamental benchmark problems (e.g., qSA, Match2) that we expect to be useful in future studies of transformers and related architectures. We abstracted away many aspects of transformers and embeddings, and this certainly poses interesting questions for future work. (Our particular choices are in part motivated by the practical scaling trends discussed at the top of Page 2 of our paper.) We would very much appreciate suggestions for other aspects of embeddings that might be amenable to mathematical analysis.
Rebuttal 1: Rebuttal: We thank the reviewers for their detailed and thoughtful feedback on our submission. We are grateful that the reviewers largely appreciated the strength and value of our fundamental theoretical contributions while identifying areas of improvement. We agree that some aspects of the presentation of the paper can be improved, and our replies to specific comments detail our plan to do so. If any questions are unanswered or our responses are unclear, we would appreciate the chance to engage further with our reviewers. Briefly, the key points of our response are the following: 1. **Precision analysis:** Reviewer bhmz asked about whether our results can be adapted to constant bit precision. We appreciate the constructive feedback, and while considering it, realized that our analysis of the bit precision of our positive sparse-averaging result can be significantly sharpened (from logarithmic to doubly logarithmic dependence on sequence length). We explain in detail in our response below why we believe this bit precision bound to be nearly optimal. 2. **Clarity and notational issues:** Reviewers iCyX and SrVL both noted a difficulty in understanding the framing of our contributions in the introductory Section 1.1, in particular due to the mathematical notation. We appreciate the reviewers calling attention to this issue, and we will add a few paragraphs (outlined in the response to iCyX) to clarify our contributions without dense notation. 3. **Omission of third-order tensor self-attention:** We acknowledge the oversight that third-order tensor attention was insufficiently discussed in the main paper body raised by reviewer bhmz. We regret the omission, and we intend to remedy it with the additional page permitted for the camera-ready version. 4. **Relevance to practical tasks:** Reviewers jpuU, SrVL, and CgC3 requested clarification on our motivation for choosing the qSA, Match2, and Match3 problems and the relevance of our work to empirical training of transformers. While our goal was to define tasks that clearly delineate the capabilities of different architectures, we agree that additional context would help readers understand our contributions. We outline our intuitions about why these tasks are relevant in the response to SrVL, and we call attention to our brief empirical results in Appendix D. Once again, we are grateful for the time and effort put into reviewing this submission, and we firmly believe that these comments will strengthen the clarity and motivation of our manuscript.
NeurIPS_2023_submissions_huggingface
2,023
Summary: The authors study the representational power of transformers. They quantify how the transformers are superior to other neural network architectures, as well as the limitation of the transformer architecture. They focus on the $q$-sparse averaging (qSA) task which amounts to averaging $d$-dimensional input vectors over subsets of size $q$ over a set of size $N$. They show by an explicit construction a unit of self-attention with $m \geq q$ dimensional embedding can approximate qSA. Using tools from communication complexity between two parties, they show fully connected networks require $\Omega(Nd)$ hidden layers, and recurrent neural networks require $\Omega(N)$ bits of information. They further show that standard transformers can approximate functions that intrinsically depends on input pairs, while failing to approximate functions that depends on triplets instead. Finally, they propose a variant of transformers that can approximate the triplet-functions. Strengths: - The authors leverage the communication complexity to show that the self attention layers add more representational capacity as compared to fully connected and recurrent neural networks. - They use Match2 and Match3 problems to show rigorously that 2nd-order functions represent the threshold of efficient approximability for transformers. - The variants of Match3 problems hint at transformers leveraging some local structures present in the data to go beyond pair matching. This reconciles the Match3 impossibility result with the much presumably higher order successes of transformer models. - They additionally show higher order transformers can solve the higher order function approximations. Weaknesses: - The fixed precision (Theorem 2) result still uses precision that needs to grow with the data size and approximation parameters. Although it is a good first step, it leaves open the approximability with constant precision (which is used in most practical cases). - The ``third-order tensor self-attention" is an important claim in the paper, but it is fully omitted in the main paper. It will be good to at least introduce the basic ideas in the main paper. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Given a fixed constant precision $p$ what is the bottleneck in creating the transformers that can approximate q-SA? What if $z_i$s are themselves restricted to some $O(p)$ precision? - Is the use of communication complexity in the approximability of Transformers (and broader Neural networks) novel in this work? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: It is hard to foresee any potential negative societal impact of this theoretical work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The fixed precision (Theorem 2) result still uses precision that needs to grow with the data size and approximation parameters. Although it is a good first step, it leaves open the approximability with constant precision (which is used in most practical cases). We agree that constant precision is an interesting case to consider, and we thank the reviewer for raising this as a potential area of improvement. When reassessing our positive result in Theorem 3, we realized that the bit precision analysis can be improved to $p = \Omega(\log(q\log(N) \epsilon))$, which should partially address the reviewer's question by reducing the dependence on $N$ to doubly-logarithmic. In brief, the sharpening proceeds by augmenting the proof with the following analysis: - Under $p = \Theta(\log (q\log(N) / \epsilon))$-bit precision, each query vector $w_y$ can be quantized with a $p$-bit floating point vector that that approximates it to accuracy $\text{poly}(\epsilon / (q \alpha))$. - Hence, we can ensure that the computed inner products satisfy $\langle u_{i'}, w_y \rangle \in [1 - \text{poly}(\epsilon / (q \alpha)), 1+ \text{poly}(\epsilon / (q \alpha))]$ if $i' \in y$ and $\langle u_{i'}, w_y \rangle \in [\frac12 - \text{poly}(\epsilon / (q \alpha)), \frac12 + \text{poly}(\epsilon / (q \alpha))]$. - Propagating this change forward, we can ensure that $\text{softmax}(\phi(X) QK^T \phi(X)^T)\_{i, i'} \in [\frac{(1 - \epsilon/2)}q, \frac{(1 + \epsilon/2)}q]$ if $i' \in y_i$ and $\text{softmax}(\phi(X) QK^T \phi(X)^T)\_{i, i'} \leq \frac{\epsilon}{2N}$ otherwise. While $\log N$ bits are needed to accurately represent the latter quantity, a lower bit-complexity would simply round the term to zero, which does not hinder the ability of $f(X)$ to approximate $\text{qSA}(X)$. The same quality of approximation is thus recovered without requiring $O(\log N)$-bit precision. However, we would also like to note that we think it's reasonable to consider a $p = \Theta(\log N)$ scaling for bit-precision. Trained self-attention matrices frequently compute non-sparse averages over all inputs (see Figure 13 in the supplement of Jumper et al, 2021), where the softmax outputs approximately $\frac1N$ for each input token. Since transformer implementations use floating-point arithmetic, one can assume that the precision of the floating point is be large enough to ensure that $\frac1N$ is not rounded to zero. On a similar note, we do not expect our bounds to work with less than $O(\log q)$-bit precision, since the aim is to approximate a function that computes an average over $q$ elements. Jumper J, Evans R, Pritzel A, Green T, Figurnov M, Ronneberger O, Tunyasuvunakool K, Bates R, Žídek A, Potapenko A, Bridgland A. Highly accurate protein structure prediction with AlphaFold. Nature. 2021. > Given a fixed constant precision what is the bottleneck in creating the transformers that can approximate q-SA? What if z_is are themselves restricted to some precision? See above. > The "third-order tensor self-attention" is an important claim in the paper, but it is fully omitted in the main paper. It will be good to at least introduce the basic ideas in the main paper. Thank you for the suggestion; we'll do just that. (The additional content page in the camera-ready should more than suffice.) > Is the use of communication complexity in the approximability of Transformers (and broader Neural networks) novel in this work? We have not seen communication complexity used in the context of Transformers before, but it has been used to prove lower bounds for neural networks (and circuits/formulas) before. We'll make sure to cite these and other works in the camera-ready version. Karchmer M, Wigderson A. Monotone circuits for connectivity require super-logarithmic depth. In Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing, 1988. Martens J, Chattopadhya A, Pitassi T, Zemel R. On the representational efficiency of restricted Boltzmann machines. Advances in Neural Information Processing Systems, 2013. Vardi G, Reichman D, Pitassi T, Shamir O. Size and depth separation in approximating benign functions with neural networks. In Conference on Learning Theory, 2021. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: I thank the authors for their helpful comments. The lowering of the precision required for their bounds is welcome. The argument sounds reasonable (although not fully verified). Overall my concerns are addressed. These concerns did not influence my evaluation much. I will maintain my score.
null
null
null
null
null
null
Equivariant Spatio-Temporal Attentive Graph Networks to Simulate Physical Dynamics
Accept (poster)
Summary: This paper introduces an E(3)-invariant temporal attention scheme, calculated with the help of discrete Fourier transform, within the E(3)-equivariant GNN framework. The overall idea of considering higher-order temporal effects in physics is sound, and the formulation appears to be correct. There are a few typos that do not affect the overall scoring, I would recommend the authors do a full proofreading. There may be missing references and potentially a missing benchmark to compare with. I recommend adding these (in **Weaknesses**). Overall, this work is solid, and I recommend accepting it for Neural IPS 2023. Strengths: The idea presented in the paper is novel, although not groundbreaking. It fills a gap in the existing framework and the direction is practical and meaningful. There are empirical improvements. The illustrations are very easy to follow. Weaknesses: There are some typos in the paper, such as the missing year in reference [16]. I recommend thorough proofreading. —Lack of previous SOTA for comparison— [1] Chen, Runfa and Han, Jiaqi and Sun, Fuchun and Huang, Wenbing. "Subequivariant Graph Reinforcement Learning in 3D Environments". Link: https://arxiv.org/abs/2305.18951 —Lack of reference for future improvements— One future direction I have, which has already been used in [1], is combining equivariance with multi-scale (MS) GNN, as most industrial-level applications involve huge graphs. Therefore, the following papers should be cited as future works. Note that [2] also combines equivariance with MS, similar to [1]. However, since there are significant differences in graph type and application needs between this paper and [2], it is not suggested to compare them directly (but they should still be cited). [2] Lino, Mario and Fotiadis, Stathi and Bharath, Anil A and Cantwell, Chris D. “Multi-scale rotation-equivariant graph neural networks for unsteady Eulerian fluid dynamics”. Link: https://pubs.aip.org/aip/pof/article/34/8/087110/2847850 [3] Cao, Yadi, Menglei Chai, Minchen Li, and Chenfanfu Jiang. "Efficient learning of mesh-based physical simulation with bi-stride multi-scale graph neural network.". Link: https://openreview.net/forum?id=2Mbo7IEtZW [4] Meire Fortunato, Tobias Pfaff, Peter Wirnsberger, Alexander Pritzel, Peter Battaglia. “MultiScale MeshGraphNets”. Link: https://arxiv.org/abs/2210.00612 Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Please refer to the **Weaknesses** section. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Please see the comments regarding "combining equivariance with multi-scale" in the **Weaknesses** section. The attention module for higher-order temporal relationships will significantly increase complexity, which may limit the application in industrial scenarios with huge graphs. The author should either acknowledge this limit, and/or analyze potential remedies for this overhead. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks! Your feedback is instrumental in strengthening our paper. >Q1: There are some typos in the paper, such as the missing year in reference [16]. I recommend thorough proofreading. Thank you very much for the mentioned typos, and we will fix them and proofread our paper carefully. >Q2: —Lack of previous SOTA for comparison— Thank you for raising the related paper [A], which will be cited in the revised paper. Notably, the reference [A] clearly differs from our paper in two aspects: Firstly, [A] mainly incorporates equivariance into Reinforcement Learning (RL) for morphology-agnostic locomotion learning, whereas our paper aims at equivariant dynamics simulation. Secondly, both the policy and Q functions used in [A] still belong to the frame-to-frame prediction paradigm and are constructed under the Markovian assumption, while our model is of the spatio-temporal form to pursue non-Markovian modeling. We will add the above discussions to the revised paper. >Q3: —Lack of reference for future improvements— Nice suggestion! We agree that combining equivariance with multi-scale (MS) GNN is valuable particuarly for industrial-level applications involve huge graphs. We are willing to cite and discuss the mentioned papers [B-D], and consider equivariant MS GNN as a future exploration direction. >Q4: On limitations. Thanks for the suggestion. We will discuss the efficiency issue for industrial scenarios with huge graphs. As suggested by the reviewer, exploring multi-scale architectures upon our model is potential to reduce the complexity overhead, which will be acknowledged in the revised paper. [A] Chen, Runfa and Han, Jiaqi and Sun, Fuchun and Huang, Wenbing. "Subequivariant Graph Reinforcement Learning in 3D Environments". [B] Lino, Mario and Fotiadis, Stathi and Bharath, Anil A and Cantwell, Chris D. “Multi-scale rotation-equivariant graph neural networks for unsteady Eulerian fluid dynamics”. [C] Cao, Yadi, Menglei Chai, Minchen Li, and Chenfanfu Jiang. "Efficient learning of mesh-based physical simulation with bi-stride multi-scale graph neural network.". [D] Meire Fortunato, Tobias Pfaff, Peter Wirnsberger, Alexander Pritzel, Peter Battaglia. “MultiScale MeshGraphNets”. --- Rebuttal Comment 1.1: Title: Reply Comment: Thanks for your reply. I made a mistake in suggesting [A] to you. It should have been Learning Physical Dynamics with Subequivariant Graph Neural Networks (NeurIPS 22), https://arxiv.org/pdf/2210.06876.pdf Considering the time is not enough for you to add any comparison before the discussion period. Now I can only suggest that the correct version be cited now. If you got accepted in the end, you should try comparing to this one in the final revision. Best, --- Reply to Comment 1.1.1: Title: Thank you for your feedback Comment: Thank you for your prompt response! Your feedback is very valuable. The suggested paper effectively utilizes hierarchy and multi-scale techniques in the analysis of large-scale graphs, which can further enhance our research. We will make sure to cite and compare it in the final version of our paper.
Summary: This paper addresses the Markov limitation of previous methods in simulating physical dynamics by treating it as a spatio-temporal prediction task. The authors propose Equivariant Graph Neural Networks (GNNs) to account for the non-Markovian nature of the systems. Additionally, they design three components to extract spatio-temporal features while preserving equivariance. The experiments conducted on three real datasets demonstrate that the proposed method surpasses previous approaches and validate the effectiveness of the three designed components. Strengths: 1. Identification of Markov limitation: The paper recognizes the Markov limitation present in previous methods and appropriately considers the non-Markovian nature of the systems. This approach is well-founded. 2. Equivariant property preservation: The designed components successfully maintain the equivariant property while extracting spatio-temporal features. The paper also provides theoretical evidence of the proposed Equivariant Spatio-Temporal Attentive Graph (ESTAG) being E(3)-equivariant. 3. Comprehensive experimental validation: The paper includes a substantial number of experiments to substantiate the proposed methods. The results consistently demonstrate the superiority of the proposed approach over alternative methods and confirm the effectiveness of the designed Equivariant Discrete Fourie rTransform (EDFT), Equivariant Spatial Module (ESM), and Equivariant Temporal Module (ETM). Weaknesses: 1. Lack of clarity regarding EDFT: The paper does not provide sufficient explanation of how EDFT improves prediction accuracy. It is important to clarify the underlying mechanisms and intuition behind the proposed component. 2. Need for additional experiments: It would be valuable to conduct further experiments to explore the performance of the proposed method in long-term recurrent forecasting. Providing results in such scenarios would enhance the understanding of the model's capabilities. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors do not address the limitations. Please refer to weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for your positive and constructive comments, and provide the answers to your questions below. >Q1: Lack of clarity regarding EDFT: The paper does not provide sufficient explanation of how EDFT improves prediction accuracy. It is important to clarify the underlying mechanisms and intuition behind the proposed component. Sorry for the insufficient clarity. Our use of EDFT is inspired by the observation of molecular trajectories on MD17. As already visualized in Figure 1 in the supplementary material, the molecular dynamics exhibit certain periodicity in terms of different frequencies. This motivates us to first transform the trajectories from time domain to frequency domain, and then compute the inner-product between different frequencies in Eq.4. In signal processing, a well-known theorem is that the Fourier transform of cross-correlation between two signals is equal to the inner-product between the Fourier transform of two signals. In this sense, Eq.4 is able to measure the cross-correlation (thus similarity) between any two trajectories in the frequency domain, and thus can be regarded as the adjacent value between nodes in the message passing in Eq. 6. Similarly, Eq. 5 computes the amplitude which is regarded as a node feature in the message passing in Eq.7. We will include the above explanations into the revised version. >Q2: Need for additional experiments: It would be valuable to conduct further experiments to explore the performance of the proposed method in long-term recurrent forecasting. Providing results in such scenarios would enhance the understanding of the model's capabilities. Nice suggestion! We additionally explore the performance of the proposed method in long-term recurrent forecasting as suggested. The setting in our current paper predicts only one frame at time $T$. Here we recurrently predict the future frames at time $T, T+\Delta t, T+2\Delta t, \cdots, T+10\Delta t $ (the value of $\Delta t$ follows the setting in the paper) in a rollout manner, where the currently-predicted frame will be used as the input for the prediction of the next frame, within a sliding window of length $T$. Note that the recurrent forecasting task is more challenging than the original scenario, and we need to make some extra improvement to prevent accumulated errors over time. Particularly for our method, we change the forward attention mechanism to be full attention mechanism (namely replacing $t$ with $T-1$ in the superscript of the summation of Eq.10 and Eq.12), as we find that the foward attention is prone to biased prediction under the recurrent setting. The results are reported in Figure A5 (General Response), where we verify that the rollout version of ESTAG delivers generally smaller MSE than all compared methods for all time steps. We will contain the evaluation of the long-term recurrent forecasting in the revised paper. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I value both the quality of the paper and the thoroughness of the rebuttal. Consequently, I have chosen to maintain the positive score. --- Reply to Comment 1.1.1: Title: Thanks for your reply Comment: Dear reviewer: Thank you for taking the time to respond. We greatly appreciate your endorsement of our work.
Summary: This paper studies the non-Markovian dynamics that often appear in physical systems and proposes a spatio-temporal E(3) equivariant graph network that moves beyond the simple frame-to-frame prediction task. The authors introduce an equivariant feature extraction method based on Fourier Transform, as well as separable equivariant spatial and temporal modules to process spatio-temporal information. They evaluate the proposed method on different benchmarks and vastly outperform frame-to-frame equivariant methods and non-equivariant spatio-temporal GNNs. Strengths: The paper studies the non-Markovian dynamics in physical systems, an often overlooked yet very important property. It is well-written and easy to follow. The novelties are clear, and the ablation studies support their usefulness. The quantitative results show a massive performance gain from incorporating equivariance and sequence dynamics. Weaknesses: The paper claims that "we are the first to use equivariant spatio-temporal graph models for physical dynamics simulation", yet it is missing out on a few related works, namely LoCS [1], and more recently, EqMotion [2]. Both works propose equivariant graph networks and focus on sequence-to-sequence prediction for physical systems. Hence, the authors should do a more thorough literature review and adjust their claims. Many of the neural network modules used in the proposed method are not adequately described in the manuscript, and their significance is not tested with an ablation study. For example, the learnable parameters $w_k$ are only briefly described, and their exact form, as well as their usefulness, are unclear. #### References [1] Kofinas, Miltiadis et al. Roto-translated Local Coordinate Frames for Interacting Dynamical Systems. NeurIPS 2021. [2] Xu, Chenxin et al. EqMotion: Equivariant Multi-agent Motion Prediction with Invariant Interaction Reasoning. CVPR 2023. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Following the weaknesses above, a comparison with other spatio-temporal equivariant graph networks would further enhance the credibility of the proposed method. 2. How important is the use of $w_k$, given that these features are further processed during message passing? 3. Since this work focuses on non-Markovian dynamics, an ablation study on the optimal number of past timesteps would be beneficial and insightful. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for your positive and constructive comments, and provide the answers to your questions below. >Q1: Following the weaknesses above, a comparison with other spatio-temporal equivariant graph networks would further enhance the credibility of the proposed method. Thank you very much for raising these two papers: LoCS and EqMotion, which will be definitely cited and discussed in Related Work. LoCS proposes two versions: the Markovian one and the non-Markovian one; the non-Markovian version is related to our method and it resorts to GRU units to record the memory of past frames, and predict the next frame conditional on the current frame in an auto-regressive manner, which can be regarded as the RNN style. On the contrary, our method resorts to the spatio-temporal setting and employs all past frames as the input to predict the target one, which can be regarded as the Transformer style. As for EqMotion, it first distills the input trajectory of each node into one multi-dimension vector, by which the spatio-temporal graph is compressed as one single spatial graph. By contrast, our method retains all input frames within both the spatial and temporal modules, such that it is able to better capture spatio-temporal correlations in a more elaborate way. Here, we additionally implement EqMotion on MD17 and Motion datasets for evaluation, since its code is more friendly to be adjusted for these two tasks. For fair comparisons, the input of EqMotion only contain node coordinates, as the same as our method and other baselines. We find that EqMotion performs much worse by directly predicting the absolute coordinates. We then modify EqMotion to predict the relative coordinates across two adjacent frames. Besides, we perform zero-mean normalization by subtracting all coordinate vectors of all nodes and all graphs with their mean, for further improvement of EqMotion on Motion dataset. The results are reported as follows, where the clear superiority of our ESTAG is still observed. |MD17 | ASPIRIN | BENZENE | ETHANOL | MALONALDEHYDE | NAPHTHALENE | SALICYLIC | TOLUENE | URACIL | |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |EqMotion | 0.721 | 0.156 | 0.476 | 0.600 | 0.747 | 0.697 | 0.691 | 0.681 | | ESTAG | 0.063 | 0.003 | 0.099 |0.101 | 0.068 | 0.047 | 0.079 | 0.066| | Motion | walk ($\times 10^{-1}$) | basketball ($\times 10^{-1}$) | |:---:|:---:|:---:| | EqMotion | 201.008 | 1362.900 | | EqMotion (zero-mean) | 1.011 | 4.893 | | ESTAG | 0.040 | 0.746| We will add the above discussions and adjust our claims accordingly. >Q2: How important is the use of $w_k$ given that these features are further processed during message passing? Sorry for the currently insufficient explanation. As mentioned Line 140-142, $w_k$ acts like spectral filters of the $k$-th frequency and enables us to select related frequency for the prediction. It is calculated as $\omega_k=f(\mathbf{h})$, where $w_k$ is a scalar ranging from 0 to 1, $f$ is implemented as an MLP plus a Sigmoid output, and $\mathbf{h}$ is the input feature. Here, we conduct an ablation study to evaluate the effect of $w_k$: |MD17 | ASPIRIN | BENZENE | ETHANOL | MALONALDEHYDE | NAPHTHALENE | SALICYLIC | TOLUENE | URACIL | |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | ESTAG |0.063 | 0.003 | 0.099 | 0.101 | 0.068|0.047 | 0.079 |0.066 | |w/o $w_k$ | 0.071 | 0.003 | 0.102 | 0.104 | 0.081 | 0.081 |0.079| 0.069 | In a general sense, the use of $w_k$ can further promote the performance, although the improvement is not so remarkable. We will include the above explanations into the revised paper. >Q3: Since this work focuses on non-Markovian dynamics, an ablation study on the optimal number of past timesteps would be beneficial and insightful. Thoughtful viewpoint! The number of past timesteps is indeed a significant factor we need to focus on and we did conduct an ablation study in Table 1 and Figure 3 in the supplementary material. We find that extending the number of past timesteps from 3 to 10 is able to generally improve the performance, which exhibits the necessity of non-Markovian modeling. We will highlight this point in the main paper. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer Gpx3 Comment: I would like to thank the authors for their rebuttal; they have addressed my questions and concerns. I am still positive towards this paper and I think its contribution is significant. I am keeping my score to 6. The authors should include the related works discussed during the rebuttal in the camera ready version, along with any experiments that compare against them. --- Reply to Comment 1.1.1: Title: Thanks for your feedback Comment: Dear reviewer, Thank you very much for approving our work. We will include the related works discussed during the rebuttal, along with the relevant experiments, in the camera-ready version.
Summary: This work proposes a novel architecture for predicting physical dynamics. This architecture first extracts the frequency feature of the input dynamics using a new technique. The frequency feature is then processed by spatial-temporal networks to generate predictions for the future dynamic. Through evaluating the novel architecture on multiple datasets ranging from molecular to macro levels, the authors show that their architecture outperform existing models significantly, especially on the molecular benchmark. The authors also present ablation studies to show which module is the most critical in yielding better performance. Strengths: The architecture proposed in this work (ESTAG) is novel. It is also shown to significantly outperform earlier works in this work on multiple datasets. The paper is well written and easy to read. The ablation studies conducted in this work also shows that the newly proposed frequency computation technique seems to be the most important module in the model. The result on the molecular dataset (MD17) is especially convincing that ESTAG is better than other models. Weaknesses: There are some details in the evaluation that are not clear. In the MD-17 dataset, the visualization seems to be really hard to differentiate the ESTAG model and the STEGNN model. But why is the MSE difference so big? Is the observed MSE difference in fact important for whatever downstream task that’s important? Or the STEGNN’s result is already good enough? The other two datasets (Protein and Motion datasets) show that ESTAG is still better than other models, but the gap is much smaller than the MD-17 dataset. It is unclear to me why ESTAG on MD-17 yields such a good result but not on others, can the authors provide an explanation on this? Moreover, on the motion dataset, only one person’s trajectory is used. Why only using one person’s trajectory but not all subjects’? Why visualizations on the other two datasets are not provided? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weakness. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for your positive and constructive comments, and provide the answers to your questions below. >Q1:In the MD17 dataset, the visualization seems to be really hard to differentiate the ESTAG model and the STEGNN model. But why is the MSE difference so big? Is the observed MSE difference in fact important for whatever downstream task that’s important? Or the STEGNN’s result is already good enough? Sorry for the unclear visualization showing the difference between ESTAG and STEGNN in Figure 4. Actually, for the molecules (ASPIRIN, NAPHTHALENE, SALICYLIC) highlighted by red rectangles, the difference between ESTAG and STEGNN is obvious. For those nonobious cases, the unclarity is mainly caused by 2D printing that is hard to depict 3D conformations such as angle deviation. For better visualization, we choose the molecule URACIL which is not clearly visualized in the paper, and redisplay its 3D atom coordinates in Figure A4 (General Response). ESTAG yields more accurate 3D prediction than STEGNN, which is consistent with the MSE difference. We will add the new visualizations to the revised paper. >Q2: The other two datasets (Protein and Motion datasets) show that ESTAG is still better than other models, but the gap is much smaller than the MD17 dataset. It is unclear to me why ESTAG on MD17 yields such a good result but not on others, can the authors provide an explanation on this? Thank you for this nice observation. Here, we provide some potential explanations on why ESTAG on MD17 yields such a good result but not on the other two datasets: 1. On Protein dataset, the dynamics of a protein is much more complicated than those small molecules on MD17, owing to various kinds of physical interaction between different amino acids, let along each amino acid compose of a certain number of atoms. We conjecture that our ESTAG is still hard to reveal sufficiently accurate dynamical patterns for proteins, even though its performance is already better than other methods. 2. On Motion dataset, ESTAG performs much better than other methods except STGCN. We suspect that the simulation of the walking motion is not that challenging. When we follow the reviewer's suggestion in Q3 and additionally conduct evaluation on a more challenging and complicated task: the basketball motion, the gap between ESTAG and STGCN is significant (see the table below), which indicates the effectiveness of ESTAG in broad and practical cases. | Motion_basketball | MSE ($\times 10^{-1}$) | |:---:|:---:| | PT-s | 886.023 | | PT-m | 413.306 | | PT-t | 15.878 | | Baseline-s | 749.486 | | Baseline-m | 335.002 | | Baseline-t | 12.492 | | GNN | 15.336 | | EGNN | 13.199 | | TFN | 13.709 | | SE3 | 13.851 | | STGCN | 4.919 | | ESTAG | 0.746 | We will add the above explanastions to the revised paper. >Q3: Moreover, on the motion dataset, only one person’s trajectory is used. Why only using one person’s trajectory but not all subjects’? Thank you for this comment. Indeed, there are more than one trajectories in one subject. The reason why we only use one subject (#subject 35) for evaluation is following the setting of GMN [16] which is the initial work to explore equivariant dynamics simulation on this motion dataset. To better demonstrate the effectiveness of our method in more cases, we additionally carry out experiments on the basketball motion (#subject 102) which is more challenging to simulate. Notably, for basketball motion data, we focus on the trajectories whose length is greater than 170. The results are reported when answering Q2, where we can observe that our ESTAG outperforms other methods remarkably. Conducting evaluations on all subjects (the total number is 144) is expensive and could be better left for future exploraion. >Q4: Why visualizations on the other two datasets are not provided? We apologize for the missed visualization in the paper. Here, we redisplay the visualization on Protein and Motion Cature in Figure A1-A3 (General Response). It can be seen that ESTAG achieves better prediction accuracy.
Rebuttal 1: Rebuttal: ## General Response We sincerely thank all reviewers and ACs for their time and efforts on reviewing the paper. We are glad that the reviewers recognized the contributions of our paper, which we briefly summarize as follows. - **Novelty**. "The architecture proposed in this work (ESTAG) is novel" (Hnbu); "The novelties are clear" (Gpx3); "This approach is well-founded" (pTt6); "The idea presented in the paper is novel" "It fills a gap in the existing framework and the direction is practical and meaningful" (TWyG). - **Presentation**. "The paper is well-written and easy to follow" (iyxR); "The paper is well written and easy to read" (Hnbu); "It is well-written and easy to follow" (Gpx3);“The illustrations are very easy to follow” (TWyG). - **Experiment**. "The experiments cover three real-world datasets" (iyxR); "It is also shown to significantly outperform earlier works in this work on multiple datasets" (Hnbu); "The quantitative results show a massive performance gain from incorporating equivariance and sequence dynamics" (Gpx3); "Comprehensive experimental validation" (pTt6); “There are empirical improvements” (TWyG). We also appreciate the reviewers for their thoughtful comments and concerns, and provide additional visualizations and experment results in the attached PDF file for more details. We summarize the extra contents as follows. - **Figure A1 (to Reviewer Hnbu)** visualizes the difference between ESTAG and ST-EGNN on the walk subject on Motion dataset, showing that ESTAG ahieves clearly better prediction accuracy. - **Figure A2 (to Reviewer Hnbu)** visualizes the difference between ESTAG and ST-EGNN on the basketball subject on Motion dataset, again showing that ESTAG ahieves clearly better prediction accuracy. - **Figure A3 (to Reviewer Hnbu)** visualizes the difference between ESTAG and ST-EGNN on Protein dataset, where the protein predicted by ESTAG aligns closer to the ground truth while ST-EGNN fails to predict the alpha helix. - **Figure A4 (to Reviewer Hnbu)** visualizes the 3D conformation of the molecular URACIL on MD17, where our ESTAG yields closer prediction to the ground truth. - **Figure A5 (to Reviewer pTt6)** visualizes rollout MSE on MD17, verifying the effectiveness of ESTAG in the recurrent forecasting scenarios. Pdf: /pdf/435d645547d3b53b5341d453b40d02d556a9c39f.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper aims to simulate the physical dynamics with a spatio-temporal attentive graph network. The major contribution is to integrate the concept of spatio-temporal graph neural network with DFT to capture the data dependencies. The proposed model is evaluated on three datasets regarding the molecular-, protein- and macro-level prediciton. Strengths: 1. The paper is well-written and easy to follow. 2. The proposed model is technically solid. 3. The experiments cover three real-world datasets. Weaknesses: 1. My primary concern is technical novelty. This paper adapts STGNN for physical dynamic simulation without significant contributions or novel designs. The paper claims that it is important to relieve from the Markovian assumption, however, it has been widely explored in the literature on STGNNs. 2. The related work should be further investigated. As far as I know, there are many advanced STGNNs that can achieve much higher accuracy than STGCN (which was proposed in 2018). For more details, please refer to a recent survey [1]. 3. More powerful baselines should be discussed and considered as baselines [1]. This paper only compares ESTAN with STGCN in the line of STGNNs, which is not convincing enough. For example, comparing ESTAG with an existing STGNN published in 2022 is not investigated. Reference: [1] Jin, Guangyin, et al. "Spatio-temporal graph neural networks for predictive learning in urban computing: A survey." arXiv preprint arXiv:2303.14483 (2023). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments! We provide the following responses to your concerns: >Q1: My primary concern is technical novelty. This paper adapts STGNN for physical dynamic simulation without significant contributions or novel designs. The paper claims that it is important to relieve from the Markovian assumption, however, it has been widely explored in the literature on STGNNs. The reviewer probably misunderstood our contributions. We strongly disagree that our paper adapts STGNN without significant contributions or novel designs. We provide the reasons below. 1. **Equivariance is not well explored in most STGNNs.** The main focus of our paper is on the task of 3D physical dynamics simulation, while most spatio-temporal GNNs discussed in the mentioned survey [A] are developed for uban computing. The unique challenge of physical dynamics simulation compared to uban computing is that, the model for physical dynamics simulation should obey E(3) equivariance: transforming the input coordinates by any translation/rotation/reflection will result in the output transformed in the same way. Most STGNNs, unfortunately, do not satify this crucial symmetry, as already pointed out in Line 50-52 in the paper. Exploring equivariant GNNs is currently a popular and challenging topic in machine learning [24, 16]. Our paper moves a step forward by investigating equivariant spatio-temporal GNNs, which exhibit particular difficulty and untriviality; for example, we should make both the spatial and temporal message passing equivariant. We understand that the non-Markovian property can be modeled in STGNNs, but our claim of relieving from the Markovian assumption is made in comparison with those equivariant GNNs without spatio-temporal modeling (such as EGNN). Overall, we propose an equivariant version of spatio-temporal GNNs, which can not only encode the non-Markovian property but also ensure E(3) equivariance for physical simulation. 2. **The proposed equivariant model is novel.** The entire architecture we design is novel, which consists of three equivariant modules: Equivariant Discrete Fourier Transform (EDFT), Equivariant Spatial Module (ESM), and Equivariant Temporal Module (ETM). Particularly, to the best of our knowledge, there is no previous attempt to develop equivariant DFT; in this paper, we achieve this by first translating the signals by the mean position and then adopt the same basis over the spatial dimension (Eq.3). The extracted invariant frequencies are then embedded into ESM to better leverage periodicity patterns. For ETM, the equivariant attention-based mechanism is also novel and carefully designed to ensure equivariance. >Q2: The related work should be further investigated. As far as I know, there are many advanced STGNNs that can achieve much higher accuracy than STGCN (which was proposed in 2018). For more details, please refer to a recent survey [A]. Thank you for your suggestion. We will cite the mentioned survey in Related Work. Indeed, besides STGCN [34], we did investigate ST-GCN [33], GaAN [35], and ASTGCN [13] in Line 85-95. We will discuss more advanced methods as suggested. It is worth mentioning that most spatio-temporal GNNs mentioned in [A] focus on urban computing, while our paper is concerned with the task of 3D physical dynamics simulation. >Q3: More powerful baselines should be discussed and considered as baselines [A]. This paper only compares ESTAG with STGCN in the line of STGNNs, which is not convincing enough. For example, comparing ESTAG with an existing STGNN published in 2022 is not investigated. Thank you for your suggestion. We have carefully read the survey you mentioned and choose AGL-STAN published in 2022 for comparison, since the encoder it used is transformer-based and competitive in performance. We find that AGL-STAN is originally for crime prediction, and it performs much worse for our task by directly predicting the absolute coordinates (i.e. AGL-STAN (abs)). We then modify AGL-STAN to predict the relative coordinates across two adjacent frames (i.e. AGL-STAN (rel)). The results become better and are tabulated below: |MD17|ASPIRIN|BENZENE|ETHANOL|MALONALDEHYDE|NAPHTHALENE|SALICYLIC|TOLUENE|URACIL| |-|-|-|-|-|-|-|-|-| |AGL_STAN (abs)|4.084|1.651|1.358|1.135|0.938|1.003|1.502|0.784| |AGL_STAN (rel)|0.719|0.106|0.459|0.596|0.601|0.452|0.683|0.515| | ESTAG | 0.063 | 0.003 |0.099 | 0.101 | 0.068 | 0.047 | 0.079 | 0.066 | |Protein|MSE| |-|-| |AGL_STAN(abs)|1.859| |AGL_STAN(rel)|1.671| | ESTAG |1.471 | |Motion|walk($\times 10^{-1}$)|basketball($\times 10^{-1}$)| |-|-|-| |AGL_STAN (abs)|1.675|189.082| |AGL_STAN (rel)|0.037|5.734| | ESTAG | 0.040 |0.746| We still observe that our ESTAG generally outperforms AGL-STAN. AGL-STAN (rel) performs slightly better than ESTAG on Motion-Walk, but for Motion-Basketball which is more complicated to simulate, ESTAG yields a much lower MSE than AGL-STAN (0.746 vs 5.734). Again, we highlight that AGL-STAN is not equivariant, and it could fail if we steer the input via E(3) transformation. The above discussions will be added into the revised paper to address the reviewer's concern. [A] Jin, Guangyin, et al. "Spatio-temporal graph neural networks for predictive learning in urban computing: A survey." --- Rebuttal 2: Title: Response Comment: Dear authors, Thank you for addressing my concerns. I also read the comments from other reviewers. Personally, I still have reservations regarding the technical novelty of the paper in the context of STGNNs. Since other concerns have been resolved, I would like to raise my recommendation score after consideration. Best regards, Reviewer --- Rebuttal Comment 2.1: Title: Thanks for your response Comment: Dear reviewer, we sincerely appreciate your recognition of our efforts. We will further clarify our contributions and the techinical novelty in the revised version.
null
null
null
null
null
null
Idempotent Learned Image Compression with Right-Inverse
Accept (poster)
Summary: The paper shows how to achieve a truly idempotent image compression method, where f(x) = f(f(x)). The paper shows that it's sufficient to have E(D(y)) == y. This only requires a surjective E. Strengths: Interesting derivation of the required conditions for idempotence. Also nice to see this problem studied. Weaknesses: - Old baselines: I think GDN is barely used in SOTA image codecs, where it's all ReLU or Leaky ReLU. It would have been interesting to see this method applied to eg. ELIC from 2022 vs. Balle's 2018 method. - The linear algebra presentation was hard to follow (p4). Some more intuitive insights like in the introduction would have been useful. Minor: Typo in section 3 header ("Invertibie") Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Only applied to old architectures from 2018 which contain 4 conv layers and GDN. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # To reviewer fD4t Thanks for your advices. We address your concerns as follows. **weakness-1 & limitation**: We test replacing GDN with residual blocks (as suggested by ELIC[1]), and report as follows (also shown in **[rebuttal fig.4]**): | framework | BD-rate (RB $\times$ 1 v.s. GDN) | BD-rate (RB $\times$ 3 v.s. GDN) | | -------- | -------- | -------- | | [idemp] proposed | -2.61 | -9.28 | | Balle2018 | -2.35 | -10.32 | | Minnen2018 | -1.36 | -8.97 | The results of Balle2018 and Minnen2018 are from Tab.4 in [1], e.g. for Balle2018 RB $\times$ 1 v.s. GDN the BD-rate is $[(100 + 5.68) / (100 + 8.23) - 1] \times 100 = -2.35$. It is clear that, replacing GDN with the more recent residual blocks would also improve our framework to the same degree. Thus our propsed framework is compatible with the recent advances in LIC. **weakness-2**: Thanks for your kind advice. We will make these parts more clearly presented. Specifically, we will first describe what does right-invertible linear operations look like, then we explain why null-space decomposition preserses the right-invertibility, and finally we introduce how we conduct null-space enhancement. [1] He, Dailan, et al. "Elic: Efficient learned image compression with unevenly grouped space-channel contextual adaptive coding." CVPR, 2022. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. I will stick with my accept rating.
Summary: The paper argues that invertibility is sufficient but not necessary for achieving idempotent codecs, and proposes a framework for achieving idempotent learned image compression (LIC) with right-inverse, which allows more flexible and expressive transforms. This paper details the expressive and right-reversible atomic transformations used in LIC, including convolution (using blocked rearrangement and null-space enhancement), normalization, and quantization. Strengths: 1、The paper first theoretically proves that idempotence can be relaxed to be right reversible and details the Right-Invertibie Codec. 2、The proposed framework achieves state-of-the-art RD performance among idempotent codecs. Also, it can be easily relaxed into a near-idempotent codec, which also achieves state-of-the-art re-compression performance. 3、The paper is well organized with detailed description of various parts of the whole right-reversible atomic transformations. Weaknesses: 1、Lack of analysis of the computational complexity of the proposed framework: The paper does not provide a detailed analysis of the computational complexity of the proposed framework. While the paper mentions that the proposed framework is efficient and parallel-friendly, it does not provide a detailed analysis of the computational cost of the various components of the framework. 2、The First-time compression RD performance of the proposed codec dramatically falls behind modern LIC. For most scenarios, the first compression is also important. It will be interesting for the author to provide an analysis on feasibility to further improve RD performance of the proposed framework. 3、Limited discussion of the limitations of the proposed framework: While the paper briefly discusses the limitations of the proposed framework. For example, the paper could discuss the potential impact of the assumption that the input image is preprocessed to have zero mean and unit variance on the performance of the framework. 4、Lack of comparison with non-idempotent codecs: The paper only compares the proposed framework with other idempotent and near-idempotent codecs. 5、Limited analysis of the impact of null-space enhancement: The paper only shows the impact of null-space enhancement on two lower-bpp points and two higher-bpp points, and does not provide a more comprehensive analysis of the impact of this technique on the performance of the framework. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1、Compared with the serial method, how much time complexity does the blocked rearrangement save, and will it have a huge impact on RD performance and re-compression performance? 2、Can the convolution, GDN, quantization, and other modules designed in the paper with the Right-Invertibie properties be applied to existing learned image compression frameworks? In that case, will the performance of the initial compression be dropped excessively? 3、The subjective encoding transform limits the expressiveness of the network, try other better mapping strategies? 4、Could the paper discuss the potential impact of the assumption that the input image is preprocessed to have zero mean and unit variance on the performance of the framework? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. The proposed framework considers compression on integer latent, which is different from the common LIC method. 2. The assumption that the input image is preprocessed to have zero mean and unit variance may not hold for all images. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # To reviewer 4J8x Thanks for your advices. We address your concerns as follows. **weakness-1**: We provide the flops of various components of the proposed idempotent framework (for a $256 \times 256 \times 3$ input) as follows. | components | GFLOPs | | -------- | -------- | | blocked convolution | 0.98 | | null-space enhancement | 3.12 | | coupling enhancement | 3.19 | | right-invertible GDN | 0.24 | | hyperprior and others | 0.87 | | *overall* | 8.40 | **weakness-2**: To further improve the RD performance, we can first investigate the surjective mapping restriction. We currently concat multiple surjective mappings to form a big and complex encoding transform, in order to make this encoding transform right-invertible as a whole. However, this 'concat of surjective mappings' is sufficient but not necessary, and damages the expressiveness. It is possible to ensure the right-invertibility of encoding transform with other options, for example using multi-branch structures. Additionally, the building blocks such as GDN can be improved, for example changed to residual blocks. We use GDN here only to conduct a fair comparison against baseline Balle2018. We test replacing GDN with the more recent residual blocks and see similar improvement as reported in prior works, as is shown in **[rebuttal fig.4]**. **weakness-3 & question-4 & limitation-2**: We do not make this assumption. **weakness-4**: in Fig.3, we compare with the non-idempotent traditional codecs {JPEG2000, BPG444, VTM444} as well as learned codecs {Balle2017, Balle2018, Cheng2020}. We will make this more clear. **weakness-5**: We provide the BD-rate and BD-PSNR of various components of the framework, including null-space enhancement, as follows. | setting | BD-rate ($\downarrow$) | BD-PSNR ($\uparrow$) | | -------- | -------- | -------- | | proposed | 0 % | 0 | | w. GDN | 0.24% | -0.01 | | w. int mean | 4.11% | -0.20 | | w.o. c-en | 16.48% | -0.75 | | w.o. NE | 19.68% | -0.91 | | inv. | 56.03% | -2.31 | | w. conv1x1 | 60.65% | -2.51 | It is clear that these components are crucial to the proposed framework. **question-1**: To discuss this problem, consider a 2-dimension convolution with kernel size $K \times K$, stride $S \times S$, input spatial size $H \times W$, input channel $C_i$ and output channel $C_o$. Assuming the in-place parallel matrix multiplication[1] is used. For serial method, as is described in Line 104-109, we first substract the influence of the already-solved pixels, which takes $O(C_iK^2)$ time. Then we solve for the rest pixels, which takes $O(C_o)$ time. This substract-then-solve procedure needs to be done $HW/S^2$ times, so the overall complexity is $O(\frac{HW}{S^2}(C_iK^2+C_o))$. For parallel method implemented with the proposed blocked rearrangement, we can solve for all pixels with $O(C_o)$ time complexity. We can see that, compared with the parallel method, time complexity of serial method is forbiddenly higher and not viable for practical usages. In terms of re-compression performance, both serial method and parallel method are right-inverse, so idempotence can be achieved for both. In terms of RD performance, the serial method could be better because it does not suffer from block artifact. However, this issue has been addressed by the proposed coupling enhancement. **question-2**: The proposed idempotent frameworks is implemented based on the existing learned image compression framework Balle18[2], and the performance of the initial compression is not dropped excessively, as is shown in Fig.3 (a). **question-3**: To our best knowledge, surjective encoding transform is the only way to make encoding transform right-invertible, and this right-invertibility is the key to make the coding process idempotent. Relaxation of this surjection constraint while keeping right-invertibility would make a promising imporvement over this work. **limitation-1**: The latents can be made non-integer. For example, we can add {0.1, 0.2, ..., 0.9} as quantization level. In most LIC works the quantized latent is integer (e.g. Balle18[2]). [1] Randall, Keith H. (1998). Cilk: Efficient Multithreaded Computing (PDF) (Ph.D.). Massachusetts Institute of Technology. pp. 54–57 [2] Ballé, Johannes, et al. "Variational image compression with a scale hyperprior." International Conference on Learning Representations. 2018. --- Rebuttal Comment 1.1: Title: More comments Comment: Some concerns have been addressed by the authors rebuttal. Is it possible to provide some analysis on time complexity of the algorithm? And is it easy to apply the designed modules in this paper to other SOTA learning based codec in a plug-and-play way? --- Reply to Comment 1.1.1: Comment: > Is it possible to provide some analysis on time complexity of the algorithm? We extend Tab.1 and Tab.2 in the paper as follows, adding the actual 'encode-decode' time column as reference. **Table 1**: The BD-BR, BD-PSNR, FLOPs and encode-decode time of different methods on Kodak dataset. FLOPs and enc-dec time are calcualted on an input of shape 256 × 256 × 3. | Method | BD-BR($\%$)$\downarrow$ | BD-PSNR(dB)$\uparrow$ | GFLOPs$\downarrow$ | enc-dec time(ms) $\downarrow$ | | -------- | -------- | -------- | -------- | -------- | | JPEG2000 | 0.00 | 0.00 | - | - | | Helminger2021[1] | 4.83 | -0.21 | 15.89 | 185 | | Idempotent Proposed | **-28.75** | **1.63** | **8.40** | **110** | **Table 2**: PSNR drop during 50 re-compression of different non-idempotent and near idempotent codecs. FLOPs and encode-decode time tested under the same condition as Tab. 1 | Method | PSNR drop (dB) $\downarrow$, round=5 | round=10 | round=25 | round=50| GFLOPs $\downarrow$ | enc-dec time(ms) $\downarrow$ | | -------- | -------- | -------- | -------- | -------- | -------- | -------- | | (Non-Id) BPG | 1.16 | 1.93 | 2.10 | 2.19 | - | - | | (Non-Id) VTM | 1.19 | 2.09 | 4.50 | 7.18 | - | - | | (Non-Id) Balle18[2] | 2.18 | 3.17 | 5.65 | 8.46 | 6.23 | 80 | | (Non-Id) Cheng20[3] | 2.44 | 4.76 | 8.59 | 12.40 | 51.99 | >1000 | | -------- | -------- | -------- | -------- | -------- | -------- | -------- | | (Near-Id) Kim20[4] | **0.18** | **0.61** | 3.18 | 8.26 | **6.23** | **80** | | (Near-Id) Cai22[5] | 1.36 | 2.01 | 2.75 | - | 131.46 | 240 | | Near-Idempotent Proposed | *0.74* | *0.83* | **0.87** | **0.87** | *48.78* | *115* | It is clear that for both the idempotent and near-idempotent setting, the proposed framework achieves SOTA performance with comparable FLOPs and time complexity. > Is it easy to apply the designed modules in this paper to other SOTA learning based codec in a plug-and-play way? Yes, these modules can be applied in a plug-and-play way to many other SOTA architectures like [6]. - For blocked rearrangeed convolution, the only key point is that the receptive field is non-overlapping. Everything else is the same as normal convolution at inference time. - Null-space enhancement can be applied to trained linear tranforms (including convolution) without changing their weights. - Coupling enhancement is already plug-and-play. - Right-invertible activations can be improved from GDN-based to the SOTA residual-block-based [6], as is shown in **[rebuttal fig.4]**. - Right-invertible quantization can be used in replace for normal quantization in a plug-and-play way. However, it is not clear how to apply similar idea to transformer based architecture like [7] in an efficient way. Thanks for raising this point, we think this leaves a meaningful future research direction on how to make transformer based architecture idempotent. We will add those discussions in the final version to inspire more research. [1] L. Helminger, et al. "Lossy image compression with normalizing 353 flows." ICLRW 2021. [2] J. Ballé, et al. "Variational image compression with a scale hyperprior." ICLR, 2018. [3] Z. Cheng, et al. "Learned image compression with discretized gaussian mixture likelihoods and attention modules." CVPR, 2020. [4] J.-H. Kim, et al. "Instability of successive deep image compression." ACM MM, 2020 [5] S. Cai, et al. "High-fidelity variable-rate image compression via invertible activation transformation." ACM MM, 2022 [6] D. He, et al. "Elic: Efficient learned image compression with unevenly grouped space-channel contextual adaptive coding." CVPR, 2022. [7] Zhu, Yinhao, Yang Yang, and Taco Cohen. "Transformer-based transform coding." ICLR. 2021.
Summary: This paper introduces a learned image codec with a right-inverse transform. The task is to ensure that an image can be re-compressed multiple times without significant quality degradation, while the compression is still lossy. This work is one of the few early attempts along this line of research. Its applications are rather niche. Strengths: The paper is easy to follow and the task is novel. Weaknesses: (1) The main idea of null space decomposition has been proposed in J. Schwab, S. Antholzer, and M. Haltmeier, Deep null space learning for inverse problems: convergence analysis and rates. Inverse Problems, 35(2):025008, 2019. (2) The necessity and benefits of the blocked rearrangement convolution with coupling enhancement over the ordinary convolution are not clear. It seems to me that this part is not essential to null space decomposition. The authors argue that blocked rearrangement convolution can work more efficiently. However, there is no evidence or ablation study to support the argument. (3) The right inverse requires E o D to be an identity matrix. However, Eqs. (8) and (9) suggests D o E is an identify matrix. (4) Can the re-compression be done on different computation platforms while maintaining the right inverse property? (5) There is no ablation study on the near-idempotent idea. In Section 3.5, the authors mention that the near-idempotent codec has better first-time compression performance than the idempotent codec. Is there any insight into this? Also, how about the recompression performance as compared to the idempotent design? (6) The proposed method involves SVD. I wonder if this complicates the training process. (7) The authors should cite "https://github.com/mahaichuan/Versatile-Image-Compression" for invertible compression backbone design. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See my comments in the weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: Please clarify whether the training would be complicated by the use of SVD. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # To reviewer H19p Thanks for your advices. We address your concerns as follows. **weakness-1**: Yes, and we have mentioned it at Line 43. We are the first to apply null-space decomposition to LIC task. More importantly, we propose several novel designs to enable efficient idempotent LIC as summarized in L39-47. **weakness-2**: Null-space decompostion is of use only when the right-inverse can be practically calculated. Ordinary convolutions have overlapping receptive fields, and this makes the calculation of its right-inverse forbidenly costy, as explained in line 104-109. The proposed blocked rearrangement convolution addresses this issue by making the receptive field non-overlapping. To be more specific, consider a 2-dimension convolution with kernel size $K \times K$, stride $S \times S$, input spatial size $H \times W$, input channel $C_i$ and output channel $C_o$. Assuming the in-place parallel matrix multiplication[1] is used. For serial method, we first substract the influence of the already-solved pixels, which takes $O(C_iK^2)$ time. Then we solve for the rest pixels, which takes $O(C_o)$ time. This substract-then-solve procedure needs to be done $HW/S^2$ times, so the overall complexity is $O(\frac{HW}{S^2}(C_iK^2+C_o))$. For parallel method implemented with the proposed blocked rearrangement, we can solve for all pixels with $O(C_o)$ time complexity. We can see that, compared with the parallel method, time complexity of serial method is forbiddenly higher and not viable for practical usages. On the other hand, non-overlapping receptive field causes visual artifact, which can be effectively removed by the proposed coupling enhancement, as is shown in Fig.4(b) and explained in line 267-274. Together, blocked rearrangement convolution with coupling enhancement provides computable right-inverse without visual artifact. **weakness-3**: We beg to differ. On the one hand, Eq. (8-9) only describe how we compute the right inverse of convolution. But $E$ and $D$ are NOT just convolutions. On the other hand, even for convolutions, a right-invertible convolution only requires $K$ to be full rank in its column, and this does NOT necessarily make $KK^+$ an identity matrix. Eq. (8) follows the convention of GEMM and writes $K$ on the right of $X$, which does not affect the order of $E$ and $D$. **weakness-4**: The idempotence is theoretically assured by right-inverse, thus cross-platform compatibility can be achieved. **weakness-5**: The insight is that near-idempotent codec does not need the right-invertibility of the encoding transform, and thus has wider choices or better model capacity. We provide the ablation study between proposed idempotent and near-idempotent frameworks, as is shown in **[rebuttal fig.3]**. It is clear that near-idempotent framework has much better first-time compression RD performance than idempotent framework. In terms of recompression RD performance, however, near-idempotent can only reach similar RD performance with much higher computation cost (8.40 GFLOPs v.s. 48.78 GFLOPs in Tab.1 and Tab.2 ). **weakness-6 & limitation**: Yes, it slows down the training speed for 2-3 times. But please note that in practical usages only inference stage is used. And inference stage is not affected because we only need the matrix, and SVD is not needed when doing inference. **weakness-7**: Thanks for the kind advice, and we will include this in the reference. [1] Randall, Keith H. (1998). Cilk: Efficient Multithreaded Computing (PDF) (Ph.D.). Massachusetts Institute of Technology. pp. 54–57 [2] M. Lezcano-Casado. Trivializations for gradient-based optimization on manifolds. NeurIPS, 2019 --- Rebuttal Comment 1.1: Title: About weakness-2 and -4 Comment: I thank the authors for putting much effort into addressing most of my comments. (1) About the weakness-2, the authors introduce the coupling enhancement layers to increase the receptive field of the blocked rearrangement convolution. Although the blocked rearrangement convolution is parallel friendly with lower time complexity, I wonder whether the multiply-accumulate operations (blocked rearrangement convolution + coupling enhancement layers vs. ordinary convolution) may be increased in the end. (2) About the weakness-4: it is widely known that different machines process floating-point arithmetic differently. Although it is argued that the right inverse is theoretically assured, how the floating-point precision issue across machines can be practically addressed in the algorithm appears to me still an issue and may impact the practicality of the proposed method. --- Reply to Comment 1.1.1: Title: Response to 'About weakness-2 and -4' Comment: We thank the reviewer for thoughtful comments and feedback on our work. **(1)** The MAC of *blocked rearrangement convolution + coupling enhancement* is about 2.79 times that of *ordinary convolution*. However, actual running time is not only about MAC but also about parallelism, as is explained in the rebuttal, which makes *blocked rearrangement convolution + coupling enhancement* a lot more faster than *ordinary convolution* for computing right-inverse. **(2)** On the one hand, perfect cross-platform consistency is hardly considered in LIC, and may prove a difficult problem itself. As is shown in [1], to perfectly avoid the floating point issue, the whole encoding-decoding process has to be carried out with only integers, and rounding after floating point calculation is not enough. Therefore, the prevalent integer flow based invertible transforms such as Helminger2021 and [2-5] may all suffer from the floating point issue. On the other hand, pure integer-integer transform can also be made surjective and thus allows for perfect right-inverse. Consider the following toy example | input | output | | -------- | -------- | | $0$ or $1$ or $2$ | $0$ | | $3$ or $4$ | $1$ | | $5$ | $2$ | | $6$ | $3$ | When we get $0$ as ouput, we can right-inverse it as $0$, $1$ or $2$ and it will still be transformed to $0$. This is a perfect right-inverse, and how to choose among $0$, $1$ or $2$ can be seen as null-space enhancement. More generally, a all-integer linear transform $Ax=y$, $s.t. A \in \mathbb{Z}^{d \times D}, x \in \mathbb{Z}^D, y \in \mathbb{Z}^d$ can be perfectly right-inversed as long as $A$ is a surjection from $\mathbb{Z}^D$ to $\mathbb{Z}^d$, though in this case much more need to be considered than mere rank of $A$ [6]. To sum up, there is no contradiction between our work and cross-platform consistency issue. Thanks for pointing out a very interesting problem, which is a good future research direction. However, this issue is normally considered separately (i.e. by dedicated works like [1]) and is out of the scope of our work as well as previous works such as [2-5]. [1] Ballé, Johannes, et al. "Integer networks for data compression with latent-variable models.", ICLR, 2018. [2] Hoogeboom, Emiel, et al. "Integer discrete flows and lossless compression.", NeurIPS, 2019. [3] Berg, Rianne van den, et al. "Idf++: Analyzing and improving integer discrete flows for lossless compression.", ICLR, 2020. [4] Zhang, Shifeng, et al. "ivpf: Numerical invertible volume preserving flow for efficient lossless compression", CVPR, 2021. [5] Ma, Haichuan, et al. "End-to-end optimized versatile image compression with wavelet-like transform." TPAMI, 2020 [6] https://en.wikipedia.org/wiki/Diophantine_equation#System_of_linear_Diophantine_equations
Summary: In this work the authors address the problem of stability of codec re-compression (idempotence). In particular the paper points out the difficulty of achieving idempotence in learned image compression, and existing methods rely on invertible models such as normalizing flows which limits their performance. The observation is that the only constraint is to have right-invertible transforms. Blocked convolutions are proposed, such that the right inverse matrix corresponding to the encoder convolution is used on the decoder side. Besides this main point, the paper also presents additional improvements to: 1) limit the effects of the block pattern in the receptive field, 2) extend the commonly used GDN layers, and 3) address the issue with mean-shift trick quantization that doesn’t guarantee idempotence. Strengths: In general the paper is well written and clearly motivated. There are several key contributions: + Idea of using right inverse and its implementation with the blocked convolution and right matrix inverse. + Addressing all remaining details in a sound way: GDN layer, issue with the mean-shift trick in the mean-scale entropy model It is also important to mention that has: + State of the art results for idempotent image compression + An ablation study that shows the importance of each contribution + Code provided along with the submission. Weaknesses: In general I think the paper is doing a great job in presenting the problem and the solution I would have minor points that authors should address in the rebuttal: 1. I think the transition from function notation to matrix notation is not smooth, and in general the transition from idempotence to right inverse could be better explained for readers less familiar with the topic of learned image compression. 2. The word “Rearrangement” together with the kernel size 5 example in line 94 brings a lot of confusion. It took me a while to realize that the paper was proposing a new convolution and receptive fields are not overlapping which is needed to make everything work. 3. Too much space is used for the Blocked convolution while the description is not detailed enough for the rest. For example what is f(Y) exactly (eq. 11 and 12). How is it learned? 4. The extension to Near-Idempotence is a bit rushed and it’s not clear of considering the modification applied to other layers might be better? 5. In the experiments, It should always be mentioned “idempotent” or “near-idempotent” to avoid any doubt. 6. For reference, best performing non idempotent model should appear in Figure 3.a 7. For sanity check, both the idempotent version and [Helminger2021] should appear in Figure 3.b Other details :\ l. 74: section title “Right-Invertibie” -> “Right-Invertible’\ l. 94: missing word in “if exists”? Technical Quality: 3 good Clarity: 3 good Questions for Authors: My questions are in the weaknesses (priority must be given to questions 3 and 7) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors discuss limitations in a sufficient manner. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # To reviewer FGoX Thanks for your advice. We address your concerns as follows. **weakness-1**:Thanks for the kind advice. We will improve Sec.2 and Sec.3.1 in order to give a better presentation for these two points. Specifically, to better explain the transition from idempotence to right inverse for readers less familiar with LIC, in Sec.2, we will first introduce the general form for LIC, that is $y = Q \circ E(x)$ and $\hat{x}=D(y)$ ($\hat{x}$ is the reconstruction and other notations are accordingly explained). Then, we introduce the re-compression setting, which is formalized by Eq. 2. After that, we explain the idempotence requirement, which requires $E \circ D$ to be canceled. From this requirement, we give the usual solution of inverse, and finally the proposed solution of right-inverse. To make transition from function notation to matrix notation smoother, in Sec.3.1, we will first give higher level description for right-inverse of convolutions using function notation. Then, we explicitly give the shape of every variable in the function notation, for readers to get better understand of the calculation of right-inverse. Finally, we give the matrix notation, and re-emphasize the shapes so that the readers can better link matrix notation with function notation. **weakness-2**: Sorry for the unclearness. We will make this more clear by focusing on why overlapping receptive field is problematic and how do we overcome this. **weakness-3**: We will rebalance different parts and explain more on the null-space enhancement and near-idempotent setting. For $f(Y)$: $f(Y)$ is the implementation of $F$, and $F$ is the arbitrarily chosen variable in eq. 10. We choose to implement $F$ in the form of $f(Y)$ so that it is adapted to each different $Y$. Specifically, $f(\cdot)$ is learned with a common residual block, and we will include this in the experiment section. **weakness-4**: We provide additional ablation study, as follows, that tests what happens if the modification to other layers is not applied. Specifically, we test keeping the GDN layers unchanged (*w. gdn*) and keeping the convolution layers unchanged (*w. conv*). Keeping both the GDN layers and the convolution layers would reduce to baseline Balle2018. As is shown in **[rebuttal fig.1]**, keeping more layers unchanged may slightly improve first-time compression performance, but is evidently harmful to re-compression performance. The proposed near-idempotent framework has the best re-compression performance among these settings. Extra ablation study on whether changing each layer is very interesting, we are waiting for more results due to limited computational resources. **weakness-5 & weakness-6**: Thanks for your kind advices. We will modify these two points. **weakness-7**: We add idempotent codecs in Fig.3(b) for sanity check, which is shown in **[rebuttal fig.2]**. Please note that idempotent codecs are straight lines and cover each other in the figure. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the rebuttal and the additional figures. My concerns have been addressed.
Rebuttal 1: Rebuttal: To all reviewers: please kindly reach to the pdf file below for **[rebuttal fig.x]**. Pdf: /pdf/1c6f3fe8b2a7eddbe912e259fecd1700837dd5da.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Enemy is Inside: Alleviating VAE's Overestimation in Unsupervised OOD Detection
Reject
Summary: The paper first mathematically examines the unsupervised (without training label) OOD detection performance using VAE, decomposing the expected ELBO into two components: (i) entropy $\mathcal{H}(x)$ of a dataset, (ii) KL divergence $D_{KL}(q(z)||p(z))$ between the estimated $z$ and the prior. It's theoretically shown that the entropy of data distribution is defined by itself, thus may not bring benefit to the OOD detection problem (Eq. 8). Then the paper mathematical and empirically analyzes the second component. The paper shows in some simple cases the prior $p(z)$ and the dataset $p(x)$ can not fit well with the VAE model, which results in for some $x$, $p_\theta(x)$ estimated by the trained VAE model has high value when $p(x)$ has low value, which is the overestimation problem for OOD detection. The paper proposes to use post-hoc prior method (estimate the prior from the trained VAE and the ID dataset) to revise the issue of the improper design of prior and add calibration to alleviate the issue of entropy. Empirical results show that the proposed AVOID method constantly improves the OOD detection performance simply based on ELBO (Table 3), and outperforms existing unsupervised non-ensemble OOD detection methods. Strengths: 1. The paper mathematically examines the unsupervised (without training label) OOD detection performance using VAE, decomposing the expected ELBO into two components: (i) entropy $\mathcal{H}(x)$ of a dataset, (ii) KL divergence $D_{KL}(q(z)||p(z))$ between the estimated $p(z)$ and the prior, which is quite crucial to understand the underline benefit and drawback to use ELBO as OOD score. 2. The demos including Figure 2,3,4 shows the improper of the traditionally chosen prior leading to the mismatch between prior and post-hoc prior, and the high probability of OOD sample in the prior. The observation well inspires the proposed post-hoc prior method. 3. Experiment includes varied OOD detection methods including supervised, auxiliary, and unsupervised (ensemble/non-ensemble), and shows the proposed method beats baselines within a specific category. Weaknesses: 1. Notation is not consistent such as $p$ and $p_\theta$ in Figure 3. 2. Eq. 8 uses the entropy difference between ID and OOD distributions. Eq. 8 tells us the more diverse the ID distribution, the harder the OOD detection task. I think the OOD here should consider overall OOD distribution instead of an OOD dataset distribution. If not, I can simply define each OOD data point as a distribution which has $\mathcal{H}_{p_o}=1$ or I consider overall OOD data together (overall OOD distribution) which may have a pretty large diversity and very low entropy. Thus the motivation for the second method is not well held. I believe the idea of the second method is good itself, it leverages some extra information to improve the OOD performance. 3. Sec. 3.1 uses 3-layer NN for $q_\phi$ and $p_\theta$. The dataset is synthetic, thus I wonder whether increasing the number of training samples and NN capability would help better estimate $p_\theta(x)$. In other words, the reason that ELBO suffers from overestimation is the number of training samples, NN capability, or something else. Or perhaps the observation from Figure 3 is even when ID is well estimated, the OOD is still not well estimated. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: (1) I would like to ask for thoughts on Eq. (8), what's the right definition for $\mathcal{H}_{p_o}$? I feel the choices include: (i) regards only 1 sample as $p_o$, (ii) regards some OOD samples as $p_o$, (iii) regards overall OOD samples as $p_o$. I think (iii) is the best choice though it's not feasible, and (i) and (ii) are not the correct choice. The DEC itself is an OOD detection method and can be ensembled into any OOD detection method, thus I feel it's less related to factor two, the entropy issue. (2) Why in Table 2 PHP performs better than DEC on the left table and perform worse on the right table? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **For weakness 1:** Thanks for your careful reading and we will correct the inconsistent notation. **For weakness 2 & question 1:** We pretty much appreciate this thoughtful comment. In the beginning, please allow us to point out a factual error in your comments that the entropy of a dataset will become larger with the increase of the diversity (not "large diversity and very low entropy''), but it does not matter us to understand the high-level idea of your comments. For the definition of the data distribution $p_{id}(x)$ and $p_{ood}(x)$, it is reasonable to define the $p_{id}(x)$ as an underlying distribution where the observed training set $\\{x_i\\}^N_{i=1}$ is sampled i.i.d. N times from, but the definition of $p_{ood}(x)$ could be empirical and diverse. From the perspective of DEC method, given a testing sample $x_{ood}^j$, its corresponding $p_{ood}$ should be defined as a distribution where most samples with higher probabilities for generation from it are **semantically** (like category) and **statistically** (like image complexity) similar to $x_{ood}^j$. Then, it requires us to estimate $\mathcal{H}\_{ood}$ defined on $p_{ood}$ with the given testing sample $x_{ood}$. If we have sufficient OOD data points sampled from $x_{ood}^j$'s corresponding $p_{ood}$, we can provide a more precise estimation for $\mathcal{H}\_{p_{ood}}$ though it is not feasible in practice. We also agree with your point that the ground truth of $\mathcal{H}\_{p_{ood}}$ would be a const, no matter 1 for uniform distribution or 0 for impulse distribution, but you can never know the amount of OOD samples during testing, whose challenge relies on requiring us to use only one OOD data sample to estimate $\mathcal{H}\_{p_{ood}}$ on the whole OOD dataset. Although it is hard to get the ground truth $\mathcal{H}\_{p_{ood}}(x)$, we provide an approximating method for it with DEC method. There are two intuitions: 1) the images in a dataset have similar complexity (complexity is related to the low-level feature like texture, e.g., CIFAR-10 has more complexity than MNIST), where the complexity could be evaluated by image compression methods like SVD, i.e., a more complex image needs more bits (like more singular values) to compress in order to reconstruct the image to a certain degree of the reconstruction error. Figure 7 could empirically support this intuition; 2) a dataset containing images with higher image complexity should have higher entropy than the simpler ones, e.g., CIFAR-10 (categories including cat, dog, horse, etc.) is much more complex than the SVHN dataset (House numbers), which indicates the underlying $p_{cifar}(x)$ is much more diverse than $p_{svhn}(x)$ and leads to higher entropy. Finally, the DEC method scales the image complexity $\mathcal{C}_{non}(x)$ to $\mathcal{C}(x)$, which could have a similar scale of the entropy $\mathcal{H}(x)$. **For weakness 3:** Increasing the model capacity has been proved to be useless in previous works [17, 18], where the overestimation issue still exists. Considering the number of training samples could be interesting, thus we add additional experiments in **Table 3 and Figure 1 \(a-b\) of the one-page rebuttal pdf** to evaluate the influence of the number of training samples and model capacity (number of nn layers) in the synthetic and practical datasets. The conclusion is that influence of number of the training samples in alleviating overestimation is very limited and increasing the model capacity could not bring significant improvement. **For question 2:** The performance of PHP and DEC in improving OOD detection is independent, since they are focusing on totally different factors. When the $q_{id}(z)$ is very distinguishable to $q_{ood}(z)$, PHP method could perform better. When the dataset entropy (or image complexity) is extremely huge between ID and OOD datasets, the DEC method could perform better. --- Rebuttal Comment 1.1: Title: Response to rebuttal. Comment: Thank you for your rebuttal! Sorry, it's my mistake, a dataset with high diversity corresponds to high entropy. Thank you for your further explanation. How complexity and entropy got connected is still confusing me. Let me discuss by the definitions of the terms: (a) entropy of $x$ of mnist dataset $x \sim D_x^{mnist}$, $p^{mnist}(x)$: $\int -p^{mnist}(x) log(p^{mnist}(x)) dx$ (b) the density of $x$ in the overall OOD dataset $x \sim D_x^{overall}$: $p^{overall}(x)$ (c) complexity of $x$: approximate via image compress method. Then let me share what I think about the connection between (a), (b), and (c): (1) "given a testing sample $x_{ood}^j$, its corresponding $p_{ood}$ should be defined as a distribution $D_x$ where most samples with higher probabilities for generation from it are semantically (like category) and statistically (like image complexity) similar to $x_j$". It's a vague definition and It's weird to define $x$ first and then define the $D_x$. We should have $D_x$ first, then $x_{ood}^j \in D_x$ could be on the boundary of $D_x$. (2) "a dataset containing images with higher image complexity should have higher entropy than the simpler ones." It also depends on how many classes are in the dataset. I can combine two different cifar10 to make a dataset cifar20 with bigger entropy than cifar10, ie, images in cifar20 have similar complexity as cifar10, but cifar20 has a higher entropy. I believe the higher complexity implies a lower density (I'm connecting (c) and (b)). The quoted sentence in (2) is not aligned with (1), (2) is discussing a dataset while (1) is for one sample. I believe this is caused by some notation/terminology confusion. (3) If (c) is connected to (b) via higher complexity of $x$ implying a lower $p^{overall}(x)$, (this is actually related to the quoted sentence in (1)), then how is it connected with (a), a dataset entropy in equation (8)? --- Reply to Comment 1.1.1: Title: The DEC method is NOT estimating dataset entropy by image complexity Comment: Thank you for affording us a second opportunity to engage in further discussion about our paper! Firstly, we apologize for not recognizing the significant misunderstanding that may have arisen due to the unclear explanations in our initial response. We intend to address this confusion and start with your comments regarding the relationship between image complexity and dataset entropy. * (1) "It's a vague definition ..." Thanks for your correction, we agree with your point that we should have $D_x$ first, and then we can define a data sample $x^j \in D_x$. Let's make it clearer based on your definitions. i) an ID dataset $D_{id}$ is sampled from $p_{id}(x)$, and each ID sample satisfies $x_{id}^j \in D_{id}$; ii) an OOD dataset $D_{ood}$ is sampled from $p_{ood}(x)$, and each OOD sample satisfies $x_{ood}^j \in D_{ood}$; We hope to reach an agreement on the definition of $D_{ood}$. As stated in your first comment, we agree with the point that you can define $D_{ood}$ as no matter a single data sample, a set of limited data samples, or unlimited data samples. The ultimate use of these datasets $D_{ood}$ with various data sizes is still the same, which is to be used to estimate some metrics conditioned on $p_{ood}(x)$, like $\mathbb{E}\_{p}[ELBO(x)]\approx\mathbb{E}\_{D_x}[ELBO(x)]$. Moreover, it doesn't affect the analysis in our paper that a higher $\mathcal{H}\_{p_{id}}$ and $D_{KL}[q_{id}(z)||p(z)]$ would cause the gap $\mathcal{G}$ in Eq. (8) to be smaller, i.e., easier for an overestimation issue to occur. * (2) "I believe the higher complexity implies a lower density..." (3) "how is it connected with (a)..." Thanks for your vivid example ("cifar20") to demonstrate the relationship between image complexity and dataset entropy. Actually, we are still not sure if the conclusion "the higher complexity implies a lower density" is correct, because the image complexity in your example won't change with the dataset entropy. Thus, we tend to believe there is no direct relationship between a single image's complexity and a dataset's entropy. [Below is also a response to **Reviewer LEk9**] However, no matter what the relationship between image complexity and dataset entropy is, it won't affect the effectiveness of our DEC method. Because **we don't use image complexity to estimate dataset entropy** in the original paper (Sorry for our last non-rigorous response). The core idea of DEC is to add a calibration term $\mathcal{C}(x)$ to $ELBO(x)$, where $\mathcal{C}(x)$ should satisfy two properties: 1) $\mathbb{E}\_{p_{id}}[\mathcal{C}(x)] > \mathbb{E}\_{p_{ood}}[\mathcal{C}(x)]$ to make $\mathcal{G}$ in Eq. (8) to be larger to alleviate the overestimation of ELBO-based OOD detection methods (**no dataset entropy needs to be estimated here**); 2) $\mathbb{E}\_{p_{id}}[\mathcal{C}(x)]$ should have a similar scale of $\mathcal{H}\_{p_{id}}(x)$ to ensure the effectiveness of AVOID method as analyzed in line 237~246 (**$\mathcal{H}\_{p_{id}}(x)$ is estimated by $\mathcal{H}\_{p_{id}}(x) \approx \mathbb{E}\_{x\sim p_{id}}[PHP(x)]$ instead of image complexity**). For property 1, what we need is an effective method that could roughly discriminate the ID/OOD data (An ideal choice is a score function that assigns ID data "1" and OOD data "0"). Since OOD data samples may occur with semantic and/or statistical differences from ID ones and our previous PHP method has already focused on the semantic aspect, we hope the property 1 of the DEC method can be achieved from the perspective of **statistical difference**, which naturally leads to the score functions based on sample-level statistics. Image complexity is one of the choices and it has been proven to be effective by our experiments. As the definition of $\mathcal{C}(x)$ shown in Eq. (20) of our paper, $\mathbb{E}\_{p_{id}}[\mathcal{C}(x)] > \mathbb{E}\_{p_{ood}}[\mathcal{C}(x)]$ will constantly satisfy as long as the image complexity of OOD and ID samples is different. For property 2, we develop a scaled version of $\mathcal{C}(x)$ by introducing a task-adaptive scale factor $\mathcal{H}\_{p_{id}}(x)$, which is **NOT** estimated by image complexity but by $\mathcal{H}\_{p_{id}}(x)\approx \mathbb{E}\_{x\sim p_{id}}[PHP(x)]$. We admit that the DEC method based on SVD is not that perfect, because when the OOD data image complexity is similar to ID data, the SVD-based DEC method could "fail" (the contribution to alleviating overestimation is limited but it would **not** cause any harm) though AVOID method could still rely on the PHP method to alleviate overestimation. However, we note that the most important contribution of this work is firstly identifying two factors that cause the overestimation issue of VAE, and we also believe that there will be other more effective score functions to be discovered based on our findings, which can further improve the performance. --- Reply to Comment 1.1.2: Comment: Dear reviewer GLsZ, Thanks again for engaging in further discussion with us. We sincerely hope that our response has helped address your confusion regarding image complexity and dataset entropy. After clearing up the misunderstanding, please allow us to kindly request your valuable time to review the mechanism of our method and rejudge its contribution to the field of unsupervised OOD detection. Your thoughtful evaluation would be greatly appreciated. Best regards, Authors
Summary: The paper discusses the phenomenon of overestimation, the allocation of higher likelihoods to out-of-distribution data points, in deep generative models. It analyses two factors which may cause the overestimation problem specific to VAEs from a reformulation of the ELBO. These two factors are posterior collapse and a difference in entropies between in-distribution and out-of-distribution datasets. The paper proposes, again specific to VAEs, a method called AVOID for alleviating these two factors. Strengths: * The clear definition of overestimation in Eq. (3) following is useful, also for the wider literature. * The experimental setup is large: Table 3 demonstrates that the number of dataset combinations considered is numerous, in particular in comparison to previous work. Weaknesses: * In 3.2, the authors pose the questions “When is the design of the prior proper/not proper”, but answer these questions by providing an example for each case. While this is useful for illustrative purposes, it does not answer the stated question. The first few examples are furthermore focussing on linear VAEs which are not relevant for common practical uses, which limits the relevance of the theoretical results in this section. * The design of the calibration term in ll. 219 is unclear. In my opinion, it is not properly explained, and important choices like SVD are not well motivated. When would SVD likely fail? Why does SVD intuitively capture the difference in entropy between the datasets? The words “complexity” and “entropy” seem to be used interchangeably, please explain or use consistently. * The experimental results are difficult to interpret, it is partly not possible to draw meaningful insights from it. It is worth noting that this is common in similar works on alleviating OOD detection in DGMs, the methods are hard to compare due to different experimental setups. However, in this work, important questions I have are: 1) In Table 1, in the unsupervised column, why is AVOID highlighted in bold, even though WAIC outperforms it sometimes? 2) Where is the performance of a standard VAE without any adaptations listed? I find this an important benchmark. 3) What is the decision criterion for OOD vs. in-distribution? Is it a threshold on the amended likelihood? If yes, looking at the density plots of Fig. 6 (b), how is it possible that there is still a lot of overlap between the two datasets in PHP, even though the accuracy according to Table 1 is 99.2%? This seems inconsistent to me. 4) The experimental results report no standard deviations in key tables, such as Table 1 and 2 . DGM based methods are well known to be unstable, hence standard deviations would be useful. However, Table 3 partly alleviates this problem due to the large number of dataset combinations considered. 5) Table 3: I would argue that comparing CIFAR10 and CIFAR100 (and possibly other combinations) seems meaningless: The datasets are overlapping, hence it is unclear what is OOD and what is in-distribution. * The language is sometimes unclear, in general slightly hard to understand, and could be greatly improved. In summary, while this work demonstrates a large effort and a clear analytical approach to alleviating the overestimation problem in VAEs, important questions remain unclear. I am open to reconsidering my score upon a response from the authors. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: * ll. 191: “The visualization reveals that p(z) cannot distinguish between the latent variables sampled from qid(z) and qood(z), while qid(z) is 193 clearly distinguishable from qood(z).” I don’t understand this sentence, could you please explain it? What do you mean by “the prior cannot distinguish latent samples”? * In Eq. (16), the proposal seems to be to add a regularizer which accounts for calibration of differing entropies to the ELBO. Why is this a good choice? Should it be weighted with the remainder of the objective? * The proposal for alleviating factor 1 is learning a more complicated, LSTM-based prior. How would a VAE which is end-to-end trained with this more complicated prior perform? Would any VAE with a more complicated prior (of which there are plenty) help alleviate the overestimation problem? * The authors could consider discussing and benchmarking against this recent work, which provides an orthogonal, score-based approach to the problem of anomaly detection in DGMs: https://openreview.net/forum?id=deYF9kVmIX Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: * The overestimation problem is common to many DGM methods, but this work provides a solution for VAEs only. The scope is bigger, and one could argue that it might be more interesting to find the underlying root cause in all DGM methods which suffer from this problem (if there is one). Yet, considering VAEs is a very interesting start. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **For weakness 1:** Thanks for this helpful suggestion. As shown in Lines 180~183 in our paper, we have summarized the answer to the stated question after enumerating several typical cases. Then we highlight that the analyzed case in Fig.3 is targeted at non-linear VAEs (note the activation function), and the conclusion can be flexibly extended for other non-linear VAEs with various network structures, such as the convolutional VAEs used in our experiments. To make the analysis more relevant to the common practical cases, we add an additional related experiment in **Table 3 and Figure 1 \(a & b\) of the "one-page rebuttal pdf"**. We testify to the influence of dataset size and model capacity for the ELBO's OOD detection performance in the 2D case and practical image dataset pairs. We found that the prior is still improper and the overestimation issue still happens. We would also add some discussions about similar empirical findings in previous papers like distributional matching [1], which has been included in our paper, and paper [2] to make it more relevant to practical cases. [1] Mihaela R., et al.. Distribution matching in variational inference. [2] Dai, B., et al.. Diagnosing and Enhancing VAE Models. **For weakness 2:** The design of the calibration term is to mitigate the entropy gap between ID and OOD datasets, as shown in Eq.8. The reason why we use image complexity (evaluated by image compression methods like SVD) to approximate the dataset entropy is inspired by the previous work [13]. The high-level idea is that an existing dataset, such as MNIST, always contains similar data points (similar content and texture complexity), which leads to similar image complexity, and datasets with more complex texture should have greater entropies, which is in line with the image complexity, e.g., FashionMNIST's entropy is reasonable to be greater than MNIST and the FashionMNIST's image complexity is shown to be higher as shown in Figure 5. However, since image complexity is not equal to entropy in scale, we propose the scaled version to scale the image complexity to approximate dataset entropy in Line 246 and Appendix D. SVD would fail when the datasets are similar in image complexity like CIFAR-10(ID)/CIFAR-100(OOD) compared with other dataset pairs as shown in Appendix G. We will add the explanation of "image complexity" and use it consistently. **For weakness 3**: Sorry for the misunderstanding. 1) we don't bold "WAIC" because we have claimed in Table 1 that "best results achieved by the methods of the category “Not ensembles” of “Unsupervised” have been bold". 2) "performance of a standard VAE" is the performance of the "ELBO [25]" in all tables as stated in line 271 "compare our method with a standard VAE [25]". We will make it more clear in the main paper. 3) The density plot of Fig. 6 (b) is the ablation study's visualization of PHP method (its auroc is 89.7%) instead of the AVOID (its auroc is 99.2%). AUROC is a commonly used threshold-free metric, named area under the receiver operating characteristic curve, which is not the threshold-needed "accuracy". PHP and AVOID's ROC curves are shown in **Figure 1 \(d\) of the one-page rebuttal pdf**. 4) Thanks for this helpful suggestion! The standard deviations have been included in Appendix H and we would add the standard derivation to the main paper's tables. 5) The CIFAR-10(ID)/CIFAR-100(OOD) dataset pair is a commonly used "hard task" pair in OOD detection tasks [1-5], since CIFAR-100 contains many unseen categories in CIFAR-10 and the image style and texture is pretty similar. [1] Morningstar, W., et al. Density of states estimation for out of distribution detection. [2] Nalisnick, E., et al. Detecting out-of-distribution inputs to deep generative models using typically. [3] Serrà, J., et al. Input Complexity and Out-of-distribution Detection with Likelihood-based Generative Models. [4] Cao, S., and Zhongfei Z. Deep hybrid models for out-of-distribution detection. [5] Fort, S., Jie R., and Balaji L. Exploring the limits of out-of-distribution detection. 6) Thanks for the helpful suggestion and we will improve the expression. **For question 1:** Sorry for the confusion. As shown in Figure 4, the sentence means that the deep-blue points (latent representation $z\sim q(z|x)$ of FashionMNIST) are much more distinguishable from the red points (MNIST) than the light-blue points (latent $z$ sampled from $\mathcal{N}(0,I)$) from the red points. **For question 2:** Actually, it would be used to alleviate the overestimation caused by factor 2, i.e., dataset entropy. Yes, we have the same thoughts as you that it could be better to balance the weights between the regularizer (calibration term) and the remainder of the objective, and we have already provided a scaled version of the calibration function in Section 4.3, whose scale score is adaptive on the entropy of ID dataset. **For question 3:** Sorry for the misunderstanding. The LSTM-based prior is learned after the VAE is trained instead of end-to-end training with the VAE together. VAE with a more complicated prior may help alleviate the overestimation problem but we think it is not efficient and not compatible with existing VAEs which are typically equipped with the efficient reparameterization trick based on a standard Gaussian prior. **For question 4:** Thanks for recommending this work. We will add the discussion of this paper to the paper since this work firstly provides an interesting and novel perspective from the gradients in doing OOD detection. But we find the performance of this method (with batch size B=1) is worse than our method and some SOTA VAE-based baselines, e.g., the AUROC in CIFAR-10(ID)/SVHN(OOD) is only 0.82. The superiority of this method is demonstrated when increasing the batch size to 5, but it is not suitable to our method and experimental setting. **For limitations:** See the last paragraph in the Author Rebuttal for all reviewers. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: * On weakness 1: I do not see how the conclusion in 180-183 " can be flexibly extended for other non-linear VAEs", please explain this more, I would like to find an agreement on this point. (9-11) states a linear VAE. Where do you show its extension to non-linear VAEs? * On weakness 2: Thank you for the explanation! To me, this term is yet not well motivated, and relies too much on it "being used in previous work". Also, it seems to be highly limited to image data, which limits the scope of the method. * On weakness: The explanations are very helpful, and should be used to clarify the experimental results in the paper. With these changes, and with the new results on standard deviations, the experimental results are improved. * The questions are well answered. --- Reply to Comment 1.1.1: Title: Clarification of the "extension" and the motivation of the DEC method Comment: We deeply appreciate your valuable feedback, which could significantly improve the quality of our paper. We hope our response below could adequately address your concerns and if there are any remaining concerns, we are more than willing to engage in further discussion with you. --- **Extension to other non-linear VAEs:** Sorry for the confusion about the statement "the conclusion can be flexibly extended for other non-linear VAEs", we feel like the confusion may be due to the unclear organization of section 3.2 and the unclear previous response. Without access to modify it now, please allow us to state the pipeline of section 3.2: * Part 1: "When the design of the prior is proper?": For a single-modal Gaussian data distribution, a linear VAE (stated in Eq. (9)) with its optimal parameters can satisfy $q(z)=p(z)$, where the overestimation issue will not occur. * Part 2: "When the design of prior is NOT proper?": We found that, for a multi-modal Gaussian data distribution, a linear VAE (same as stated in Eq. (9)) with its optimal parameters, which can still be obtained analytically, can **not** satisfy the condition $q(z)=p(z)$, which indicates the design of the prior is not proper and will lead to overestimation. * Part 3: "More empirical studies on the improper design of prior": Considering the fact that more VAEs in practice are non-linear, we try to investigate the overestimation of non-linear VAEs. However, it is hard and meaningless to get the analytical solution of the parameters for a single specific non-linear VAE, which inspires us to conduct a series of empirical studies on the cases of non-linear VAEs. Note that, we have included the results of non-linear VAEs with various network structures in Fig.3 ((a-b) is an MLP-based VAE for a synthesized 2D multi-modal dataset, and (c-d) is a CNN-based VAE for realistic image datasets), and can find that the condition $q(z)=p(z)$ can hardly be achieved because **it is difficult or even impossible to make $q_\phi(z|x)$ to be exactly the same as $p(z|x)$** through optimization in practice [1,2,3]. [1] Mihaela R., Balaji L., and Shakir M. Distribution matching in variational inference. [2] Dai, B., and David W. "Diagnosing and Enhancing VAE Models." [3] Dai, Bin, Li Kevin Wenliang, and David Wipf. "On the Value of Infinite Gradients in Variational Autoencoder Models." Based on part 3's findings and analysis, a **conclusion** is that these non-linear VAEs used in our paper, which are trained by optimizing ELBO, cannot satisfy the condition $q(z)=p(z)$ and lead to overestimation in practice. Please note that the related findings in paper [1,2,3] are also not specific to the neural network setting for modeling $q_\phi(z|x)$. Thus, **this conclusion** can be **flexibly "extended" for other non-linear VAEs** with different network structures (e.g., number of layers and network types), which helps us to achieve the conclusion in lines 180-183. --- **Motivation of DEC:** For the motivation of DEC, we also discussed it with reviewer GLsZ (please see https://openreview.net/forum?id=31zVEkOGYU&noteId=LXigKkXvo8), where we found there is a big misunderstanding of the relationship between image complexity and dataset entropy. Briefly, no matter what the relationship between image complexity and dataset entropy is, it won't affect the effectiveness of our DEC method. Because **we don't use image complexity to estimate dataset entropy** in the original paper (Sorry for our last non-rigorous response). The main contribution of our paper is to identify the root cause of the overestimation issue of a trained VAE and **SVD-based method is only one of the choices** of implementing the dataset entropy calibration (DEC) method. We choose the SVD-based DEC method because the **benchmarks** of the OOD detection mainly focus on the image datasets and the SVD-based DEC method could make it easier to be understood and testify. We believe that there will be other ideal implementation methods to provide more accurate discrimination between ID and OOD, and further improve the performance of OOD detection. But please note that factor 2 (higher ID dataset entropy)'s contribution to overestimation is **not restricted to the data type**. Besides, SVD could be applied to other "matrix" data types **beyond just image data**. High-dimension non-matrix data could also be transformed to be a matrix and then apply the SVD method. For low-dimension data, there could be some traditional density estimation methods like the Gaussian mixture model that can be helpful. Note that VAE and other Deep Generative Models are usually focused on high-dimension data. We deeply regret not having provided a clear clarification regarding the "extension" and "motivation of DEC" in our initial response. We hope that this follow-up response will adequately address your concerns. [A further discussion could be found here: https://openreview.net/forum?id=31zVEkOGYU&noteId=QSTNNYppKw ] --- Reply to Comment 1.1.2: Comment: Dear reviewer LEk9, We deeply appreciate the valuable time and effort you've taken to share your valuable insights and comments. Since your thoughts are of great importance to us, could we kindly request a small portion of your time to review our secondary response? We hope our response has clarified the meaning of "extension to other non-linear VAEs", the motivation of the developed method, and also its contribution to the field of unsupervised OOD detection. We are sincerely waiting for your feedback. Best regards, Authors --- Reply to Comment 1.1.3: Title: Thanks for your efforts and contributions during the discussion of the rebuttal! Comment: Dear reviewer LEk9, We would like to express our sincere gratitude once again for your efforts and contributions during the discussion of the rebuttal, which has significantly improved the quality of our paper. Upon realizing that the discussion seems to be able to continue beyond the rebuttal deadline, we kindly request a brief extension of your time to assess our responses to your remaining concerns. We found both concerns are due to some misunderstandings. Should any concerns persist, we are more than eager to engage in further discussion with you. Thanks again! Best regards, Authors
Summary: The paper studies unsupervised OOD detection (i.e., training data contains no labels) using deep generative models. DGMs model the probability distribution of the inputs, and can be an ideal candidate for unsupervised OOD detection. The authors study one specific class of DGMs, namely VAEs. They show that VAEs suffer from overestimation problem ($P(x_{ood}) > P(x_{id})$) due to two main reasons — dataset’s inherent entropy and improper design of prior distribution. The paper then proceeds to theoretically suggest ways to mitigate this issue, and shows experimental results that do so. Strengths: 1. The theory of the paper is simple but inspiring, and matches neatly with the designed algorithm. 2. The experiments are well-designed and executed. 3. The ablation studies are well-done. Weaknesses: 1. Prior work such as [1] that discusses causes of deep generating models’ (specifically, normalizing flows) reason for failure to perform OOD detection was not cited/discussed in the paper. Similarly, [2] is also an important paper for using DGMs for OOD detection that wasn’t cited. 2. The paper is not self-contained and the organization could be improved — for example, one could put the limitations in the main paper instead of in the appendix. 3. Notation of the paper. For example, $p(x) = N(x | 0, \Sigma_x)$ can be more readable as $x \sim N(0, \Sigma_x)$, following more commonly used convention. [1] Polina Kirichenko, Pavel Izmailov, Andrew Gordon Wilson. Why Normalizing Flows Fail to Detect Out-of-Distribution Data, https://arxiv.org/abs/2006.08545, 2020 [2] Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, and Balaji Lakshminarayanan. Detecting out-of-distribution inputs to deep generative models using a test for typicality. arXiv preprint arXiv:1906.02994, 2019. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What is the architecture used for the supervised OOD detection methods in Table 1? Is it a 3 layer MLP? 2. Intuitively, supervised OOD detection methods should do better than unsupervised ones, since having access to more information should never hurt the performance. However, that is not the case in Table 1. Do the authors have an explanation for this? 3. Line 278, why is the max epochs set to such a high value (1000)? Is there some sort of early stopping that is used here? 4. I am curious on how the reasons noted by [1], namely normalizing flows learning latent representations based on local pixel correlations and not semantic content, are relevant for VAEs. Does VAEs focus on semantic content and not local pixel correlations? [1] Polina Kirichenko, Pavel Izmailov, Andrew Gordon Wilson. Why Normalizing Flows Fail to Detect Out-of-Distribution Data, https://arxiv.org/abs/2006.08545, 2020 Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **For weakness 1:** Thanks for your suggestion and we will add these citations. **For weakness 2:** We're sorry for the non-self-contained organization and we will add the limitation to the main paper. **For weakness 3:** Thanks for your comment on the notation and we will improve it as you suggested. **For question 1:** For the supervised OOD detection methods, the performance is based on the network structures described in their original papers instead of a 3-layer MLP. Most of the supervised methods are based on a classifier, e.g., LN [9] is based on WRN-40-2 [1]. [9] Wei, Hongxin, et al. "Mitigating neural network overconfidence with logit normalization." International Conference on Machine Learning. PMLR, 2022. [1] Zagoruyko, Sergey, and Nikos Komodakis. "Wide residual networks." arXiv preprint arXiv:1605.07146 (2016). **For question 2:** This is not the truth. In the research area of supervised OOD detection, there is a common issue: neural networks are known to suffer from the overconfidence issue, where they produce abnormally high confidence for both in- and out-of-distribution inputs [9]. In other words, the classifier would abnormally assign wrong categories to OOD data. This is still a hard issue to be addressed by the supervised methods. We will add this explanation to the main paper. **For question 3:** We follow the same experimental setting with these SOTA VAE-based baselines (HVK [17] and $\mathcal{LLR}^{ada}$ [18]), where the max epoch is set as 1000 and the best neural network parameters are saved according to the best ELBO in the training set. For the early stopping, we found actually the ELBO is stable after 100~200 epochs with our model trained in FashionMNIST and CIFAR-10 (see the training curve of ELBO in **Figure 1 \(c\) of one-page rebuttal pdf**), which could be considered set as a proper early stopping epoch. **For question 4:** As the cause of the failure analyzed in paper [1], the flow model's learned local pixel correlations and generic image-to-latentspace transformations are not specific to the training image dataset. Since VAEs also focus on both semantic content (mainly in the image-to-latentspace process by the encoder $q_\phi(z|x)$) and local pixel correlations (mainly in the reconstruction process by decoder $p_\theta(x|z)$), similar phenomenon has been observed in VAEs: HVK [17] and $\mathcal{LLR}^{ada}$ [18] found that the reconstruction for ID and OOD data could be both high-quality, which means VAEs also learned some information that is not specific to the ID training dataset, i.e., low-level information instead of semantic information. Part of the success of the paper [1], HVK[17], and $\mathcal{LLR}^{ada}$ [18] are the same: forcing the model to learn ID training dataset's specific semantic information. In our paper, that is the PHP method, which learns the $q(z)=\int q_\phi(z|x)p(x)$ and then uses $D_{KL}[q(z|x)||q(z)]$ instead of $D_{KL}[q(z|x)||p(z)]$, since $p(z)$ is uninformative and could contain non-training-dataset information. --- Rebuttal Comment 1.1: Comment: This is an interesting paper overall, and I thank the authors for writing a comprehensive rebuttal to my concerns. The idea of decomposing the ELBO and addressing concerns of over-estimation is neat and ties the method of the paper nicely. Some of the citations [1-9] are missing. A few more comments/questions: 1. **(class conditioning)** The method is unsupervised in the sense that it does not use the in-distribution labels. To add to the comment by the authors on unsupervised vs supervised OOD detection, could learning class conditional distributions on the data help even more with OOD detection? i.e., could this paper's method be generalized where we can use the class labels? 2. **(weak baselines)** The paper compares with outlier exposure [1] as a supervised method using auxiliary data. However, outlier exposure, despite being a foundational method in this field, was published in 2019 and several direct improvements over it has been found. For example, energy based out-of-distribution detection [2] can be a stronger baseline. Same thing goes for supervised methods not using auxiliary data, for example, the authors compare against CP, which is the MSP method from [3]. However, energy score [2] and MaxLogit [4] has been known to perform better. 3. **(Energy score)** Is EN in table 1 the same as energy score method? It cites the same paper. 4. **(Error bars in table 1 or 2)** The paper does not report error bars in the table. While Appendix H does mention average error bars, it is unconventional **to the best of my knowledge**, and individual comparisons without associated error bars is meaningless. 5. **(Comparison to transductive setting)** Since the method uses $n_{id}$ and $n_{x}$ for test example $x$ to calculate the calibration term $C(x)$, it has similarities to transductive/semi-supervised methods that use auxiliary datasets containing OOD examples. Comparisons to the settings, ideas and results for ERD and binary classifier [5], WOODS [6], DCM (Transductive setting) [7], where the test dataset is used to update the models/scores and get improved OOD detection performance would be important for the paper. 6. **(Over-estimation in supervised OOD detection)** The authors mention the overconfidence issue on supervised learning setup as well, and cite [8]. I think mentioning this in the paper itself, drawing parallels between the unsupervised and supervised case, and methods that people use for mitigating the overconfidence issue in the supervised case [7][8], would be important. 7. **(ID/OOD data pair construction)** The authors use CIFAR-10 and STL-10 as an ID/OOD data pair. This blurs the line of ID vs OOD, i.e., what is the definition of OOD data? I think it is common to take datasets that do not contain any common classes between them as (ID, OOD) pair, see the dataset construction in [9]. Reviewer LEk9 also complains about this issue, and while CIFAR-10 and CIFAR-100 have non-overlapping classes and is often considered a hard-OOD task, I would argue CIFAR-10 and STL-10, sharing 9 out of 10 classes between them, should not suffice as an (ID, OOD) paper. This also brings the question of **what is OOD detection** as asked by [9]. The practical use of OOD detection is to have a sort of conservative deferring mechanism [1, 7], where given a test example x, one either makes a classification on it, or in case it does not belong to any of the K classes the model knows, it is deferred to an expert. While this no longer works in this paper's unsupervised setting, consider the image of a cat from CIFAR-10, $x_1$, and the image of a cat from STL-10, $x_2$. Other than resolution characteristic (32 x 32 for $x_1$ vs 96 x 96 for $x_2$) why should $x_1$ be classified as ID and $x_2$ as OOD? **Based on all these questions/comments, I am keeping my original score.** [1] Deep Anomaly Detection with Outlier Exposure, https://arxiv.org/abs/1812.04606 [2] Energy-based Out-of-distribution Detection, https://arxiv.org/abs/2010.03759 [3] A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks, https://arxiv.org/abs/1610.02136 [4] Scaling Out-of-Distribution Detection for Real-World Settings, https://arxiv.org/abs/1911.11132 [5] Semi-supervised novelty detection using ensembles with regularized disagreement, https://arxiv.org/abs/2012.05825, Official implementation: https://github.com/ericpts/ERD [6] Training OOD Detectors in their Natural Habitats, https://proceedings.mlr.press/v162/katz-samuels22a/katz-samuels22a.pdf, Official implementation: https://github.com/jkatzsam/woods_ood [7] Conservative Prediction via Data-Driven Confidence Minimization, https://arxiv.org/abs/2306.04974, Official implementation: https://github.com/tajwarfahim/dcm [8] Mitigating Neural Network Overconfidence with Logit Normalization, https://arxiv.org/abs/2205.09310 [9] No True State-of-the-Art? OOD Detection Methods are Inconsistent across Datasets, https://arxiv.org/abs/2109.05554 --- Reply to Comment 1.1.1: Title: Further response (part 1/2) Comment: Thanks for your valuable feedback! We hope our response below could adequately address your concerns and if there are any remaining concerns, we are more than willing to engage in further discussion with you. --- **ID/OOD data pair construction:** We want to answer this question regarding "what is OOD detection" in unsupervised OOD detection at first. The OOD data in the unsupervised OOD detection task can arise not only due to semantic differences (category information) but also from statistical differences. For example, a vertically flipped (VFlip) in-distribution testing sample is also considered as an OOD sample for the in-distribution testing sample itself [1, 2], though they share the same category information. And we also add experiments on the "VFlip" case in the discussion with the reviewer ogin (https://openreview.net/forum?id=31zVEkOGYU&noteId=unI5jvIInm), where the experiments could support our analysis of the latent factor and effectiveness of the PHP method. [1] Choi, Hyunsun, Eric Jang, and Alexander A. Alemi. "WAIC, but Why? Generative Ensembles for Robust Anomaly Detection." [2] Morningstar, Warren, et al. "Density of states estimation for out of distribution detection." International Conference on Artificial Intelligence and Statistics. PMLR, 2021. **class condition:** Yes, the paper's method could be flexibly generalized when the class labels are available. Since the class label contains rich semantic information, it could be used to provide more expressive latent data representations and could further improve the OOD detection performance of our PHP method. **weak baselines:** Thanks for your helpful suggestion and we would add the results of these methods to Table 1. Since our method focuses on unsupervised OOD detection, we cite most of the results directly from the reported results in two SOTA unsupervised OOD detection baselines [3, 4]. We would carefully go through the results in Table 1 and update it with the latest performance for methods in the category of the "supervised" and "auxiliary". We note that the results in the category "unsupervised" is already the SOTA performance and share the same experimental setting as our method. **Energy score:** Yes, the performance of this energy-based method could achieve very good performance in the CIFAR(ID)/SVHN(OOD) pair (could even achieve 99.41% at the setting of applying fine-tuning with WideResNet as shown in their original paper), where we also directly cite the result reported in [4]. [3] Havtorn, Jakob D., et al. "Hierarchical vaes know what they don’t know." International Conference on Machine Learning. PMLR, 2021. [4] Li, Yewen, et al. "Out-of-distribution detection with an adaptive likelihood ratio on informative hierarchical vae." Advances in Neural Information Processing Systems 35 (2022): 7383-7396. **Error bars in Table 1 or 2**: Thanks for your recommendation of adding individual comparisons with associated error bars for Table 1 or 2. We are very sorry that we did not have enough time to provide a detailed Table 2 in the first rebuttal stage. We choose Table 2 since the unsupervised OOD detection baselines including ELBO, HVK, and $\mathcal{LLR}^{ada}$ are the most related baselines to our methods. To obtain these error bars, all the experiments have been conducted 5 times with random seeds from 1 to 5. Note that, the DEC method is an SVD-based method containing no stochasticity. From the experimental results shown in the tables, all these unsupervised OOD detection methods are relatively stable. | FashionMNIST(ID) / MNIST(OOD) | | || | --- | ------ | ------- | ----- | | Method | AUROC $\uparrow$ | AUPRC $\uparrow$ | FPR80 $\downarrow$| | ELBO [25] | 23.5 $\pm$ 0.820 | 35.6 $\pm$ 0.859 | 98.5 $\pm$ 0.389 | | HVK [17] | 98.4 $\pm$ 0.798 | 98.4 $\pm$ 0.734 | 1.3 $\pm$ 0.042 | | $\mathcal{LLR}^{ada}$ [18] | 97.0 $\pm$ 0.583 | 97.6 $\pm$ 0.723 |0.9 $\pm$ 0.039 | | -ours | | | | | PHP | 89.7 $\pm$ 0.548 | 90.3 $\pm$ 0.507 | 13.3 $\pm$ 0.249 | | DEC | 34.1 $\pm$ 0.000 | 40.7 $\pm$ 0.000 | 92.5 $\pm$ 0.000 | | AVOID | 99.2 $\pm$ 0.516 | 99.4 $\pm$ 0.605 | 0.0 $\pm$ 0.009 | | CIFAR-10(ID) / SVHN(OOD) | | || | --- | ------ | ------- | ----- | | Method | AUROC $\uparrow$ | AUPRC $\uparrow$ | FPR80 $\downarrow$| | ELBO [25] | 24.9 $\pm$ 1.418 | 36.7 $\pm$ 1.522 | 94.6 $\pm$ 0.965 | | HVK [17] | 89.1 $\pm$ 2.323 | 87.5 $\pm$ 2.967 | 17.2 $\pm$ 2.005 | | $\mathcal{LLR}^{ada}$ [18] | 92.6 $\pm$ 0.411 | 91.8 $\pm$ 0.542 | 11.1 $\pm$ 0.277 | | -ours | | | | | PHP | 39.6 $\pm$ 1.379 | 42.6 $\pm$ 1.533 | 85.7 $\pm$ 0.691 | | DEC | 87.8 $\pm$ 0.000 | 89.9 $\pm$ 0.000 | 17.8 $\pm$ 0.000 | | AVOID | 94.5 $\pm$ 1.440 | 95.3 $\pm$ 1.487 | 4.24 $\pm$0.365 |
Summary: In the context of VAE, the authors identified two factors that potentially cause VAE to assign higher likelihood to OOD data than ID data. They propose a new scoring mechanism that improves upon VAE's overestimation of the likelihood on OOD samples. Strengths: - Decomposing the ELBO carefully is interesting. In particular, they give a new prior design targeting the overestimation issue. - They have a scoring method that improves upon the standard ELBO, which partially validifies their analysis. Weaknesses: - the derivation assumes the model distribution can converge exactly to the true one, but this is impractical. If it does, there should be no overestimation issue to begin with (for practical datasets that are arguably separable, e.g. SVHN vs CIFAR). Moreover, even if it is possible in theory, the empirical and theoretical observations in [1, 2] will prevent this from happening in practice. If it doesn't, the derivation will leave an error gap that is not analyzed. In short, the key reasoning in the above is that real distribution is often supported on low dimensional sets, while model distribution is fully supported. - the evaluation is a bit outdated on easier benchmarks. To solidifies AVOID's practical impact, evaluation on the harder tasks as in DoSE [3] is necessary. [1] Dai, Bin, and David Wipf. "Diagnosing and Enhancing VAE Models." International Conference on Learning Representations. 2018. [2] Dai, Bin, Li Kevin Wenliang, and David Wipf. "On the Value of Infinite Gradients in Variational Autoencoder Models." Advances in Neural Information Processing Systems. 2021. [3] Morningstar, Warren, et al. "Density of states estimation for out of distribution detection." International Conference on Artificial Intelligence and Statistics. PMLR, 2021. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I'm happy to miss something in the paper, and be corrected. See the weakness above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful comments, the following is our response. **For weakness 1:** We absolutely agree with your point that the model distribution can hardly converge to the data distribution in practical VAEs, and there does exist a third term in ELBO to affect the performance of ELBO-based OOD detection. However, we need to claim that this error gap can only be alleviated **during the training** of VAEs but our paper focuses on improving the OOD detection performance of VAEs **after training** via simple and universal methods, which are agnostic to the training scheme or model architecture of different VAEs. Moreover, the derivation in our paper is for readers to easily understand our method (note that the analytical posteriors of $p_\theta(z|x)$ can be obtained in some Gaussian cases to make the model distribution equal to the data distribution) and we have provided a more rigorous derivation in the following to clarify your concerns (the original derivation has been included in Appendix C.1). The relationship between $p_{\theta}(x)$ and $\text{ELBO}(x)$ is: $$\log p_\theta(x)=\mathbb{E}\_{z\sim q_\phi(z|x)}[\log p_\theta(x|z)]-D_{KL}[q_\phi(z|x)||p(z)]+D_{KL}[q_\phi(z|x)||p(z|x)]=\text{ELBO}(x)+D_{KL}[q_\phi(z|x)||p(z|x)].$$ Assuming that the ground truth of the ID data distribution is $p(x)$, and if we expect $\text{ELBO}(x)$ to converge exactly to $p(x)$ (where we'd acknowledge no overestimation issue), then two assumptions must be satisfied: 1) the encoder $q_\phi(z|x)$ should make $D_{KL}[q_\phi(z|x)||p(z|x)]=0$; 2) the decoder $p_\theta(x|z)$ should make $p_\theta(x)=\int p_\theta(x|z)p(z)dz=p(x)$. We strongly agree that these assumptions are hard to achieve but our methods are **NOT** based on these two assumptions, i.e., our methods are focusing on a trained VAE, and the analyzed two factors' contribution to overestimation is **NOT** affected by the two assumptions, though it does have an error gap to be analyzed. To be more rigorous, a **trained** VAE has the following property (we will correct the corresponding equations in the main paper): $$\mathbb{E}_{x{\sim}{p(x)}}[\text{ELBO}(x)] = -\mathcal{H}(x) - D\_{KL}[q(z)||p(z)] + \text{const}_p,$$ where $\text{const}_p$ is a const that is only related to the $p(x)$ after finishing training the VAE and would not be changed by our PHP and DEC methods. Now, we give a detailed derivation for $\text{const}_p$: For each term in $\mathbb{E}\_{x{\sim}{p(x)}}[\text{ELBO}(x)]$ we could have $$\mathbb{E}\_{x{\sim}{p(x)}}[\mathbb{E}\_{z{\sim}{q_\phi(z|x)}}\log{p_\theta(x|z)}]=\mathbb{E}\_{p(x)q_\phi(z|x)}[\log\frac{p_\theta(z|x)}{p(z)}p(x)]=\mathcal{I}\_{q,p}(x,z)-\mathcal{H}(x);$$ $$\mathbb{E}\_{x{\sim}{p(x)}}[D_{KL}(q_{\phi}(z|x)||p(z))] = \mathbb{E}\_{p(x)q_{\phi}(z|x)}[\log\frac{q_\phi(z|x)}{q(z)}\frac{q(z)}{p(z)}]=\mathcal{I}\_q{(x,z)} + D_{KL}(q(z)||p(z)),$$ where $$\mathcal{I}\_{q,p}(x,z)=-\mathcal{H}\_{q,p}(z|x)+\mathcal{H}\_{q,p}(z)=\mathbb{E}\_{p(x)q_\phi(z|x)}[\log p_\theta(z|x)]-\mathbb{E}\_{q(z)}[\log p(z)];$$ $$\mathcal{I}\_{q}(x,z)=-\mathcal{H}\_{q}(z|x)+\mathcal{H}\_{q}(z)=\mathbb {E}\_{p(x)q_\phi(z|x)}[\log q_\phi(z|x)]-\mathbb{E}\_{q(z)}[\log q(z)].$$ Further, we have $$\mathbb{E}\_{x{\sim}{p(x)}}[\text{ELBO}(x)] = [\mathcal{I}\_{q,p}(x, z) - \mathcal{I}\_q(x,z)] - ({\mathcal{H}(x) + D_{KL}(q(z)||p(z))}),$$ where $\mathcal{I}\_{q,p}(x,z)$ will gradually approach $\mathcal{I}\_q(x,z)$ in the process of optimizing ELBO. Importantly, when $\theta$ and $\phi$ are fixed after training, thus $[\mathcal{I}\_{q,p}(x, z) - \mathcal{I}\_q(x,z)]$ is a const, i.e., $$\text{const}\_p = \mathcal{I}\_{q,p}(x, z) - \mathcal{I}\_q(x,z),$$ and would not be changed during applying PHP (replacing $D_{KL}[q_\phi(z|x)||p(z)]$ to $D_{KL}[q_\phi(z|x)||\hat{q}_{id}(z)]$, note that $I_q(x,z)$ is not related to $p(z)$) and DEC (add a calibration term $\mathcal{C}$) method. We admit that there is an error gap brought by $\text{const}\_p$ in ELBO to influence the performance of ELBO-based OOD detection, and it can hardly be optimized to zero because it is difficult to achieve the above two assumptions for a usual non-linear VAE. However, the influence of this term will become more and more slight with the introduction of more powerful neural networks and optimization algorithms, and it is also beyond the investigation scope of this paper. We emphasize that our method only focuses on how to alleviate VAE's overestimation issue and improve the OOD detection performance **after training**, and believe that the performance of our method can be further improved as long as ELBO can be better optimized during training. Performance of the additional experiments on "harder tasks" (weakness 2) could also empirically support the above analysis and demonstrate the effectiveness in alleviating overestimation of addressing factors 1 and 2 by our method. --- **For weakness 2:** Thanks for your bringing these hard tasks into our sight. The "hard tasks" in DoSE [3] refer to the ID/OOD dataset pairs: 1) FashionMNIST(ID) / MNIST(OOD); 2) CIFAR-10(ID) / SVHN (OOD); 3) CelebA(ID) / CIFAR-10/100(OOD); 4) CIFAR-10(ID) / CIFAR-100(OOD) as described in "dataset" part of Appendix C in the full DoSE paper ( https://www.alexalemi.com/publications/dose.pdf). As shown in Tables 1, 2, and 3 of the main paper, our experiments have already included the "harder" dataset pairs 1, 2, and 4. Thus, we add the experiments for the dataset pair "CelebA(ID) / CIFAR-10/100(OOD)" with the comparison between ELBO (standard VAE), two SOTA VAE-based methods (HVK, $\mathcal{LLR}^{ada}$), and our method (AVOID) in the **Table 1 of "one-page rebuttal pdf"**. Additionally, we testify our method in more datasets with the VAEs trained in CelebA in the **Table 2 of "one-page rebuttal pdf"**. The experimental results demonstrate the effectiveness of our method, which mitigates the phenomenon of overestimation with two potential factors in ELBO. --- Rebuttal Comment 1.1: Title: The hard cases in DoSE refer to the VFlip or HFlip versions, not just the pairs Comment: As the title suggests, please refer to Table 1 in DoSE carefully. VFlip and HFlip are vertical and horizontal flips of the original test data. So OOD differs from IID by only one latent factor. Not going through them carefully weakens your statements. --- Reply to Comment 1.1.1: Title: Misunderstanding is caused by that "simple" and "hard" tasks have been previously defined in the DoSE Comment: **We sincerely apologize for our misunderstanding of the concept of the "harder task."** Actually, it is exactly due to that **we are too "carefully"** focused on going through the DoSE paper, where the "simple" and "hard" have been clearly defined in DoSE paper. We directly cite the original description below in their AISTATS'21 paper's page 12 (https://www.alexalemi.com/publications/dose.pdf): >"Many of these dataset pairings are **“simple”,** in that likelihood alone would be a reasonable rule to detect OOD data. However, there are several **“hard”** OOD dataset pairings identified by previous work. FashionMNIST→MNIST and CIFAR10→SVHN were both identified as difficult dataset pairings by Nalisnick et al. [2019a]. Additionally, Nalisnick et al. [2019b] identified CelebA→ CIFAR10/100 and CIFAR10→CIFAR100 to be particularly difficult pairings. The latter is particularly difficult, since both are subsets of the 80 million tiny images dataset [Torralba et al., 2008], but have non-overlapping class labels." Thus, as you especially emphasize the "harder" task, we **wrongly followed DoSE authors' definition** of it and added the experiments on "CelebA(ID) / CIFARs(OOD)". Please allow us to express our apologies again. For comparisons on "VFlip and HFlip", we pretty much appreciate your thoughtful comments "OOD differs from IID by only one latent factor", which could be an interesting perspective for evaluating the PHP method. In this case, the DEC method is totally useless since the calibration term $\mathcal{C}(x)$ of a data sample is the same as its flipped version, and AVOID's performance will be the same as the PHP's performance. However, we can only partly agree with this about the "VFlip", where **"HFlip" seems to be very weird and meaningless in some datasets**. Take the CelebA dataset for example, how can we know whether an unseen testing sample (a "face") is horizontally flipped or not? Therefore, we found that our PHP method could generally improve the OOD detection performance on the "VFlip" experiments but no significant improvement on the "HFlip" experiments as shown below. | AUROC of "VFlip" | CelebA | CIFAR10 | SVHN | FashionMNIST | MNIST | | ------------- | ------ | ------- | ----- | ------------ | ------ | | ELBO of VAE [25] | 74.2 | 49.5 | 50.4 | 69.5 | 82.7 | | PHP (=AVOID) | 85.7 | 53.7 | 52.7 | 86.2 | 84.9 | | AUROC of "HFlip" | CelebA | CIFAR10 | SVHN | FashionMNIST | MNIST | | ------------- | ------ | ------- | ----- | ------------ | ------ | | ELBO of VAE [25] | 49.6 | 50.5 | 50.6 | 68.4 | 83.4 | | PHP (=AVOID) | 50.1 | 50.4 | 50.5 | 70.2 | 85.3 | The experiments on "VFlip" could support our analysis of the latent factor and the effectiveness of the PHP method. We sincerely value your insightful feedback, as it could substantially enhance the quality of our paper. We hope our response could adequately address your concerns and if there are any remaining concerns, we are more than willing to engage in further discussion with you. --- Reply to Comment 1.1.2: Comment: Dear reviewer ogin, We greatly appreciate your continued engagement in the discussion. We have included your mentioned "hard" tasks in our secondary response. We sincerely hope these experimental results can address your concerns and demonstrate the effectiveness of our developed method. If there are any remaining concerns, we are more than willing to engage in further discussion with you. Best regards, Authors
Rebuttal 1: Rebuttal: **For all reviewers: Introduction to additional experiments in the attached "one-page rebuttal pdf"** First of all, we would like to extend our sincere gratitude to all the reviewers for their meticulous reviews, thoughtful comments, and valuable suggestions. Their feedback has greatly contributed to the improvement and clarity of this manuscript. We deeply appreciate the time and effort invested in guiding our work. We've conducted the following experiments in response to the feedback from the reviewers: **1. Evaluating proposed methods on "harder tasks":** As suggested by **reviewer ogin**, we add comparisons between our methods and other VAE-based OOD detection methods on the "harder tasks", i.e., detecting CIFAR-10/100 as OOD with VAEs trained on CelebA (ID). The experimental setup remains consistent with that detailed in Section 5.1 of the original paper. The results are presented in **Table 1** of the attached one-page rebuttal pdf file. As indicated in Table 1, our methods could still effectively alleviate the overestimation and improve OOD detection performance in these harder tasks. **2. Evaluating proposed methods on more OOD datasets with VAEs trained on CelebA:** Following the above experiment, we testify the proposed methods' OOD detection performance on more OOD datasets in **Table 2** of the attached one-page rebuttal pdf file. The results demonstrate that our methods could generally alleviate the overestimation and improve the OOD detection performance. **3. Exploring the effects of dataset size and model capacity in alleviating overestimation:** In response to feedback from reviewers **LEk9** and **GLsZ**, we investigated the influence of dataset size (amount of training data) and model capacity (number of neural network layers) on the OOD detection performance of ELBO, using both the synthesized 2D multi-modal dataset and realistic image datasets ("FashionMNIST(ID) / MNIST(OOD)" and "CIFAR-10(ID) / SVHN(OOD)"). Our findings are illustrated in **Figure 1 (a-b)** and **Table 3** of the attached one-page rebuttal PDF. For the 2D multi-modal dataset, we sampled a data volume 10 times greater than its inherent distribution p(x) than the original configuration seen in Figure 3(a-b) of the main paper, increasing from 10,000 to 100,000 training samples. The VAE for this experiment utilized a 10-layer MLP as opposed to the original 3-layer MLP. Notably, the results from Figure 1(a) highlight that the $q_{id}(z)$ is still not equal to p(z) = N (0, I) and Figure 1 (b) indicates indicates the persistence of the overestimation problem in the non-linear deep VAE. For the practical image datasets, we varied the dataset size and model capacity (number of CNN layers) to investigate their effects on ELBO's OOD detection performance. However, results show that increasing the amount of data and the number of CNN layers does not yield significant improvements. **4. Training curve:** In response to the concern raised by reviewer **Bzqi** regarding the number of training epochs, we have illustrated the training curve of the negative ELBO in **Figure 1 \(c\)** of the attached one-page rebuttal pdf file, based on a VAE trained on the CIFAR-10 dataset. These results are drawn from five random runs with distinct seeds. The negative ELBO rapidly decreases within the initial 200 epochs and subsequently stabilizes in the following epochs. **5. ROC curve and corresponding AUROC value:** In addressing the concerns of reviewer **LEk9** about the AUROC value for PHP, we've depicted the ROC curve for PHP, AVOID, and ELBO using the "FashionMNIST(ID) / MNIST(OOD)" dataset pair. This can be viewed in **Figure 1(d)** of the attached one-page rebuttal PDF. Additionally, we've included a **discussion addressing the frequently raised concerns about the limitations of our work**: Identifying the underlying root cause in all DGM methods which suffer from the overestimation issue could be very challenging now, since the training paradigms and model architectures vary significantly across DGMs, e.g., the training objective of flow models is the exact marginal likelihood but VAE's training objective is an evidence lower bound (ELBO). But this direction is very attractive and we would keep on researching it. Pdf: /pdf/9cc8645e02e7bd751f7352a6a01fbca87cd5a589.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Evolving Connectivity for Recurrent Spiking Neural Networks
Accept (poster)
Summary: This study presents an application of a previously developed approach, NES, to train RSNNs formulated based on connection probabilities. However, there are concerns regarding the ethical aspect of this work, specifically the reproducibility and proper citation of related works (see Weaknesses). Due to these reasons, it is challenging to recommend this manuscript for acceptance in its current form. Strengths: n/a Weaknesses: o The lack of code availability at the time of review poses a significant barrier to verifying the claims made by the authors. o Additionally, it appears that an existing work by Stockl et al., 2021 from Maass group, which applies NES to learn RSNNs formulated in terms of probability skeleton inspired from Billeh et al., is not cited in this manuscript. This omission warrants clarification as it may influence the perceived contribution of this work. o Beyond the ethical issues outlined above, this paper has several limitations that need to be addressed. Firstly, the results presented are purely empirical and there is a lack of theoretical contributions. Secondly, the manuscript lacks an analysis of how varying the sample size impacts the results, which leaves unanswered questions regarding the robustness and efficiency of the proposed methodology. Lastly, it is unclear whether the authors have performed a thorough tuning of the hyperparameters, which could significantly impact the performance of the benchmarks. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: n/a Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: See weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[W1] Code availability** **[Response]** Thanks for your advice, but we cannot agree on code availability as the main reason to reject. According to the conference policy in Call for Papers, NeurIPS strongly **encourage** accompanying code and data to be submitted with **accepted papers** when appropriate, rather than a mandatory requirement. And, we have already promised in Appendix A that we will release the code upon the paper is accepted. However, to address your concern, we are glad to send a copy of the code to the Area Chair. --- **[W2] An similar existing work by Stockl et al., 2021 using NES** **[Response]** We appreciate your attention to the existing work by Stockl et al., 2021, which applies NES to learn RSNNs formulated in terms of probability skeleton inspired by Billeh et al. We acknowledge the relevance of this work and will include it in our revised manuscript. However, our EC framework has several fundamental differences from the approach taken by Stockl et al., 2021 and NES, which we detail below: 1. **Hyperparameter searching vs. parameter training**. Stockl et al. search tens to hundreds of free parameters to characterize the connection skeleton, akin to hyperparameter optimization in deep learning and HyperNEAT in evolutionary computation. In contrast, our EC framework operates more like training 1-bit neural networks in deep learning, searching every one-to-one connection parameter in the RSNN, and is agnostic to network architecture. In our experiments, we train the full 193K RSNN connections using EC, which represents more than $1000\times$ the number of dimensions than the 44 to 164 hyperparameters for the probabilistic model explored by Stockl et al. 2. **Discrete vs. continuous**. Our work presents a novel approach to the NES framework by utilizing a 1-bit discrete search space formulation, which distinguishes it from the original NES paper (xNES, SNES, Wierstra et al., 2011) and its derivatives like ES (Salimans et al., 2017). These prior works employed continuous search spaces parameterized by normal distributions to train real-valued parameters in neural networks, as seen in conventional deep learning. Our EC framework also offers several unique advantages: 1. **High performance**. Our results demonstrate that EC outperforms ES on training RSNNs, despite EC-RSNN using 1-bit discrete parameters, which are 1/32 the size of the 32-bit floating point precision employed by ES-RSNN. 2. **Faster training & inference**. As discussed in our paper, the 1-bit connections resulting from the EC framework offer potential for accelerated training and inference. In Fig. 5, we show that EC-RSNN exhibits $2 \sim 3\times$ efficiency compared to ES-RSNN. 3. **Scaling to complex tasks**. EC's ability to efficiently search the full 193K RSNN parameter space enables it to tackle complex tasks that require a larger number of network parameters. The Humanoid task is a challenging locomotion task in reinforcement learning community, typically addressed by neural networks with parameter scales ranging from 10K to 100K, as demonstrated by Salimans et al. and Freeman et al. Our EC framework can solve this task with performance comparable to deep RNNs. --- **[W3-a] Lack theoretical contributions** **[Response]** The focus of our manuscript is on the development and evaluation of the evolving connectivity (EC) framework for training RSNNs, emphasizing its hardware-friendly characteristics and performance. We do not consider it as a requirement to add theory to empirical experiments. It would be helpful if you can provide some detailed theoretical insights. --- **[W3-b] Impact of sample size on robustness and efficiency** **[Response]** In response to your concern regarding the robustness and efficiency of our proposed methodology, we have taken the following steps to address these concerns: 1. As shown in our paper, to evaluate the robustness of our method, we performed multiple trials (n=3) with different random seeds for each experiment. By reporting the average and standard deviation of the results, we demonstrate the consistency of our approach across various initial conditions. 2. To assess the generalization of our methodology, we conducted experiments using 1-bit deep RNN architectures, which further supports the adaptability of our approach to other network configurations. 3. In terms of efficiency, we have compared our method's performance with existing methods by measuring the wall-clock time, providing evidence for the improved efficiency of our approach. We acknowledge the importance of analyzing the impact of sample size on the results; However, we believe our current analysis sufficiently demonstrates the robustness and efficiency of our proposed methodology. --- **[W3-c] Hyperparameters tuning and benchmark** **[Response]** We appreciate your concern about the thoroughness of hyperparameter tuning, as it is crucial for the performance of the benchmarks. We assure that we have conducted an extensive tuning of the hyperparameters for all baseline models. 1. The PPO implementation in Brax (Freeman et al., 2021) obtains 11,300 for the Humanoid environment. Our baselines (SG-RSNN, ES-GRU, and ES-LSTM) yielded returns of 11,500, 13,000, and 15,000, respectively, outperforming the Brax benchmark. 2. As detailed in our paper, we tested various surrogate function parameters for SG-RSNN and selected the best set as our baseline. 3. We have conducted additional experiments, as in Supplement PDF Fig. S4 and S5, including PPO-LSTM, PPO-GRU, and a broader hyperparameter tuning on SG-RSNN, to confirm that our baseline is fairly constructed. Our rigorous tuning process contributed to the enhanced performance of the baselines, and we are confident in the thoroughness and effectiveness of our hyperparameter optimization. --- The detailed experimental settings and results can be found in the comment text and Supplemental PDF in the overall rebuttal. --- Rebuttal Comment 1.1: Title: Response acknowledged Comment: I appreciate the additional experiments and details that the authors provided to enhance the robustness, addressing many of my primary concerns. I also value the discussion contrasting with Stockl et al. While including code during the review process would have improved transparency, I trust the authors to provide a copy to the area chair, alleviating this concern. With these issues addressed, I will raise my score above the acceptance threshold. --- Reply to Comment 1.1.1: Comment: Thanks for your positive response to our rebuttal. We will try our best to make our paper more clear in the next version.
Summary: Facing on the inaccurate and unfriendly limitation of current surrogate gradient-based learning methods for recurrent spiking neural networks (RSNN), this study develops the evolving connectivity (EC) framework for inference-only training. The EC framework reformulates weight-tuning as a search into parameterized connection probability distributions, and employs Natural Evolution Strategies (NES) for optimizing these distributions. The performance of the proposed EC is evaluated on a series of standard robotic locomotion tasks, where it achieves comparable performance with deep neural networks and outperforms gradient-trained RSNNs. Strengths: 1. The motivation is reasonable and the application scene is interesting. 2. The framework of EC considering the weight reparameterization and connection evolution is reasonable. Weaknesses: 1. It seems that the weight-based parameterization method and the NES method used in EC is not novel, it seems to use the method proposed in other papers. 2. The energy consumption should be computed to show the efficiency of the proposed framework. 3. The experiments seems not enough to verify the effectiveness of the proposed model. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. The novelty of the weight-based parameterization method and the NES methods in EC framework should be explained further. 2. The energy consumption should be computed to show the efficiency of the proposed framework. 3. How about the performance comparison with Transformer models? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: See the weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[Q1]** *The novelty of the weight-based parameterization method and the NES methods in EC framework should be explained further.* **[Response]** Thank you for your advice. Our work introduces a novel approach to the NES framework by utilizing a 1-bit discrete search space formulation, which distinguishes it from the original NES paper (xNES, SNES, Wierstra et. al) and its derivatives like ES (Salimans et. al). Prior studies employed continuous search spaces parameterized by normal distributions to train real-valued parameters in neural networks, as seen in conventional deep learning. Our proposed method searches for discrete 1-bit connection-based parameterization within the NES framework, deriving a natural gradient estimator tailored for this discrete space. This approach offers several advantages over the continuous parameter search using ES. As demonstrated in our paper's experiments, our EC method yields superior final performance and accelerates the training process by 2 to 3 times. Additionally, the use of integer arithmetic for 1-bit values, as opposed to floating-point arithmetic for continuous values, reduces computational costs and enhances compatibility with neuromorphic hardware. --- **[Q2]** *The energy consumption should be computed to show the efficiency of the proposed framework.* **[Response]** Thank you for your valuable suggestion. In our study, all experiments are conducted on the same NVIDIA Titan RTX GPU, operating at a consistent 100% GPU power (280W). As a result, the energy consumption of the training process is estimated proportional to the computation wall time as shown in Table 1. Our EC framework demonstrates an energy improvement over SG and even more than 2 times to ES. We will incorporate energy consumption calculations in Section 6 to further demonstrate the efficiency. *Table 1: Estimated training power comsumption* | Model | Run time (h) | Power Consumption (KWh) | |---------|--------------|-------------------------| | EC-RSNN | 18 | 5.0 | | ES-RSNN | 46 | 12.9 | | SG-RSNN | 20 | 5.6 | --- **[Q3]** *How about the performance comparison with Transformer models?* **[Response]** Thank you for your question. Although Transformer models have demonstrated remarkable performance in numerous domains, their direct application to online reinforcement learning (RL) presents challenges. Specifically, conventional Transformer models encounter stability issues in RL tasks and require modifications for stabilization (Emilio Parisotto et al. 2019). We appreciate your suggestion and consider it a promising avenue for future research. --- Rebuttal Comment 1.1: Comment: Thanks for the explaination and the additional experiments. It looks like the energy consumption of the EC-RSNN over SG-RSNN is not so advantagous, it is strange since EC is claimed to utilize a 1-bit discrete search space, please explain the reason further. The transformer on online reinforcement learning have been explored further in recent years, such as Zheng, Q., Zhang, A., & Grover, A. (2022, June). Online decision transformer. In international conference on machine learning (pp. 27042-27059). PMLR. --- Reply to Comment 1.1.1: Comment: **[Comment 1]** *It looks like the energy consumption of the EC-RSNN over SG-RSNN is not so advantageous, it is strange since EC is claimed to utilize a 1-bit discrete search space, please explain the reason further.* **[Response]** Thank you for your insightful comment. We appreciate the opportunity to further clarify the energy consumption comparison between EC-RSNN and SG-RSNN. There are two primary reasons for it: 1. **Comparison Standard.** Since epoch in SG and generation in EC are different concepts, our initial comparison computed the final energy consumption under the similar total computation time, which did not account for the difference in the final return. In fact, EC-RSNN achieved a higher final return (i.e., 13,808) compared to SG-RSNN (i.e., 11,505). To provide a more comprehensive comparison of energy consumption, we have included an additional evaluation based on the energy consumption required to 'solve' the Humanoid task. We have set the 'solve' return thresholds at 11,300, which corresponds to the PPO return reported by the Brax maintainer. As illustrated in Table 1, when comparing the energy consumption at the same return thresholds, EC-RSNN demonstrates a more advantageous energy efficiency than SG-RSNN. This comparison highlights the benefits of utilizing a 1-bit discrete search space in EC-RSNN and provides a clearer understanding of the energy consumption differences between the two models. | Model | Return | Run time (h) | Power Consumption (KWh) | |:---------:|:------:|:------------:|:-----------------------:| | EC-RSNN | 11,300 | 6 | 1.7 | | SG-RSNN | 11,300 | 20 | 5.6 | 2. **GPU Implementation.** It is noteworthy that GPUs have significantly expedited floating-point operations, a prevalent data type within the deep learning community. However, 1-bit is not a built-in data type in JAX and GPU, like most computational platforms. In our current JAX code, we could only utilize the boolean data type as a substitution, whose efficiency is limited compared to a 1-bit implementation. Despite this implementation difficulty, we achieved 3x efficiency with the same return. It demonstrated that our EC-RSNN can have a potentially more significant advantage compared to SG-RSNN. In the future, we will further try to implement our EC-RSNN on specially designed neuromorphic devices supporting 1-bit implementation. --- **[Comment 2]** *The transformer on online reinforcement learning has been explored further in recent years, such as Zheng, Q., Zhang, A., & Grover, A. (2022, June). Online decision transformer. In the international conference on machine learning (pp. 27042-27059). PMLR.* **[Response]** Thanks for your insightful suggestion. The paper you referenced blends offline pretraining and online finetuning; it does not entirely adopt the framework of online learning. Actually, offline reinforcement learning has become a prevailing approach within the domain of transformer models (Lili Chen et al., 2021; Michael Janner et al., 2021). In contrast, online learning remains an area of ongoing research, primarily due to the persisting challenges related to stability (Emilio Parisotto et al., 2019). It is worth highlighting that all of our conducted experiments are grounded in the paradigm of online learning, thus rendering direct comparisons with pre-trained transformers incongruous. Despite this distinction, we intend to explore it and run additional experiments in the future. --- Rebuttal Comment 1.2: Comment: Thanks for your responses. I appreciate the addidtional experiments, however, the power consumption is not compared with ANN models, it makes it not convinced to show the energy efficiency advantage of SNNs. Is there any other ways to compare the energy consumption for the comparison? --- Reply to Comment 1.2.1: Comment: Thank you for your question. To further demonstrate our energy efficiency, we have conducted computations pertaining to the estimated energy consumption of EC-RSNN when implemented on the Loihi chip [1]. The data we used are presented in the table below. *Table: Energy data from Loihi [1] & Network data from EC-RSNN* | Parameter | Value | |---------------------------------------|-----------| | Energy per synaptic spike op $P_s$ | 23.6 (pJ) | | Within-tile spike energy $P_w$ | 1.7 (pJ) | | Energy per neuron update $P_u$ | 81 (pJ) | | # Generations $G$ | 1000 | | # Population $P$ | 10240 | | # Time steps $S$ | 33200 | | # Neurons $N$ | 256 | | # Spikes per neuron per step $R$ | 0.025 | | # Connection per neuron $C$ | 128 | | # Update operations per neuron $I$ | 4 | Firstly, we calculated the estimated energy consumption of one network. $$ E_{one} = P_u * N * I * S + (P_s + C * P_w) * N * R * S = 2.8mJ $$ Then, we calculated the total energy consumption during training. $$ E_{tot} = E_{one} * G * P = 28 kJ $$ Our ANN baselines (ES/PPO-LSTM/GRU) demand several hours of training time when executed on GPUs, consequently leading to energy consumption on the order of megajoules (MJ). Our calculations, however, indicate a noticeable reduction in power consumption by an order of magnitude of approximately $1\sim2\times$. Please note that these computations are estimates and serve as a preliminary evaluation. In the future, we intend to conduct experiments using neuromorphic chips to empirically substantiate the disparity in energy consumption between EC-RSNN and RNNs. [1] Loihi: A Neuromorphic Manycore Processor with On-Chip Learning, 2018.
Summary: The paper describes an evolutionary training algorithm to optimize the binary weight matrix of a recurrent spiking neural network. The optimization algorithm is derived using natural evolutionary strategies assuming that the weights follow a Bernoulli distribution with parameter $\rho$. The resulting weight update is remarkably simple, and despite that, it seems to work very well. The algorithm is tested on the classical locomotion tasks of simulated robotics, and the performance is reported in terms of task return and time to solution in wall clock time. On all the task RSNN model trained in this way performs better or similar to the LSTM or GRU baselines trained with ES, and it is always better than the spiking networks trained with surrogate gradient. By leveraging the discrete values of the weight matrix, the implementation is also faster than a floating point evolutionary strategy. Strengths: As said in the paper: inference-only training solutions can be very useful for hardware where the training algorithm cannot be implemented easily. This approach is likely to have important for training or fine-tuning architectures when deployed on unconventional devices. I would also highlight an important achievement that the authors did not mention enough in my opinion: it appears that this training algorithm is better than all existing training algorithms for spiking networks which is remarkable for something that is practicable and competitive with deep learning solutions to such problems. It could in fact provide opportunities for quantized and spiking architectures which are otherwise bounded in their performance by the approximate gradients that are usually computed with surrogate gradient and straight through. This is solved here since the gradients are unbiased (not mentioned by the authors as far as I see). This was probably why surrogate gradient baselines do not quite reach the performance of LSTM and GRU baselines as seen here as in previous papers, but the algorithm provides a solution to this problem. Weaknesses: 1) There are a few data points that would be useful to fully evaluate the quality of the training algorithm and separate the benefits of the spiking architecture from the benefits of the algorithm. a. Is it possible to train an LSTM or a GRU with EC? What is its return after training, is that also better than ES for the spiking neurons? b. What is the return obtained with LSTM / GRU and PPO? This is both important to verify that PPO is well-implemented and claim or not that the EC-RSNN is competitive with the regular deep learning approach. (If the performance is low, then maybe there is a problem with the surrogate implementation too). 2) I think it would be valuable to clarify that most surrogate gradient techniques were typically tested mainly without recurrent connections (Slayer, super-spike for instance). I find it admirable that the authors tested so many baselines already, but also unfortunate that they might have missed important details that were reported the be crucial in the presence of recurrent connections: depending on the weight initialization it was shown that the pseudo derivate should be multiplied by a dampening factor < 1 to avoid the accumulation of approximation errors, with Glorot initialization you might be fine without this (see Celotti and Rouat 2022). But what initialization did you use and did you test this? 3) I realized only later that most of the concrete hyperparameters are detailed in the appendix: How many layers, units per layer, batch size, epsilon. Please state clearly in the main text that all this important is available in the appendix when it's relevant. I find it in general even better to put directly these numbers in the main text when possible. (For instance, write $\epsilon = 0.001$ instead of $\epsilon \rightarrow 0$). Same remark when talking about the hardware. Instead of writing "GPGPU" or "identitical hardware" which I find vague since neuromorphic hardware is often mentioned in the main text. When possible, I would encourage writing just NVIDIA GPU and pointing out the appendix where the hardware reference is provided. On a related note, I would also encourage the authors to prefer a specific reference to the Figure number and panel at line 277 when commenting on their results. 4) In the context of this conference which is broadly addressed to the machine learning community, I think it is important that the surrogate gradient is nothing more than a variant of the straight-through gradient estimator from Bengio 2013 which is much better known outside of the neuromorphic community and was discovered before. 5) I find the first sentence line 20 a bit strange, and I find it weird to reference Pei et al. at this point. AGI is also never used elsewhere in the text (for good reasons since you have more concrete and good results to discuss), so why defining this acronym here ? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I realized that points 1a.b and 2 above would be best addressed with extra simulations. Given the time and effort that it requires, I do not consider those simulations necessary, but I would encourage the authors to address those questions verbally during the rebuttal and consider the feasibility and the cost/benefits of these potential simulations at some point. For points 3, 4 and 5. I would encourage the authors to do minor edits. More general question: Do the authors believe that this approach can replace gradient descent learning in some other context ? Or is it that ES and EC and bounded to make only sense in this type of toy reinforcement leanring task where the network size is small and the batch size can be super large? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: I see no potential negative impact. The limitations are clearly stated in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[S1]** *The gradients of the EC training algorithm are unbiased.* **[Response]** Thank you for your affirmation and suggestion. We have also pointed out that "the surrogate gradient leads to inherent inaccuracy in the descent direction" in line 37. We will further emphasize the contribution of our unbiased gradient property in the introduction of our revised manuscript. --- **[W1-a]** *Is it possible to train an LSTM or a GRU with EC? What is its return after training? Is that also better than ES for spiking neurons?* **[Response]** Thank you for your insightful question. While training an LSTM or GRU with EC is theoretically possible, it requires scaling and tweaking the gating mechanism to adapt to 1-bit weights (Xuan Liu et al., 2018). For simplicity, we conduct experiments using a vanilla RNN trained with EC. As a result, we obtained 12,455 returns with EC-RNN, compared to 11,042 returns with ES-RNN and 11,133 returns with PPO-RNN. Detailed results can be found in Fig. S3 in the supplementary PDF. The results demonstrate that EC can also train 1-bit deep recurrent neural networks, showcasing its potential for different architectures and quantized neural networks. We will add these experiment results in Section 6. --- **[W1-b]** *What is the return obtained with LSTM / GRU and PPO?* **[Response]** Thank you for your question. We train PPO-LSTM and PPO-GRU with the 12,960 and 14,312 average final return, while Brax (Freeman et al., 2021) report a PPO return of approximately 11,300. As a result, our EC-RSNN achieves a comparable final return (i.e., 13,808) with PPO, demonstrating competitive performance with deep reinforcement learning. More detailed results can be found in Fig. S4 of the Supplemental PDF. We will add these experiment results in Section 6. --- **[W2]** *Miss important details that were reported the be crucial in the presence of recurrent connections: Depending on the weight initialization it was shown that the pseudo derivate should be multiplied by a dampening factor $< 1$ to avoid the accumulation of approximation errors. But what initialization did you use and did you test this?* **[Response]** Thanks for your advice. We conduct further hyperparameter tuning for our SG-RSNN baseline, introducing a dampening factor equal to $0.8$. Our experiment shows a similar maximum performance (i.e., 11,765) with the SG-RSNN baseline (i.e., 11,546) from our paper. The detailed results can be seen in Fig. S5 in the supplementary PDF. We will replace this experiment setup in Section 6. In terms of initialization, all parameters are initialized to $0.5$, as a balance point between connection and no connection in the Bernoulli distribution. For a fair initialization, we moved the $1/\sqrt N$ term in LeCun Normal initialization to a product of hyperparameters, as shown in Table 1 below. We will add the initialization in Section 4 in our revised paper. *Table 1: Initialization* | Hyperparameter | Value | |------------------------------------|------------------------------------------------| | Input membrane resistance $R_{in}$ | $0.1 * \tau_m * \sqrt{2/d_{in}}$ | | Hidden membrane resistance $R_h$ | $1.0 * \tau_m / \tau_{syn} * \sqrt{2/d_{h}}$ | | Output membrane resistance $R_{out}$ | $5.0 * \tau_{out} * \sqrt{2/d_{h}}$ | --- **[Q2]** *For weaknesses points 3, 4 and 5. I would encourage the authors to do minor edits.* **[Response]** Thank you for your valuable suggestions. In response to [W3] and [W4], we will move the main information about hyperparameters and hardware to the main text. Meanwhile, we will further polish our statements to be more clear. As for [W5], we recognize that the reference to AGI in line 20 may seem out of place. In our revised manuscript, we will provide a more appropriate introduction to neuromorphic computing, ensuring a clear and concise presentation of the topic. --- **[Q3]** *Do the authors believe that this approach can replace gradient descent learning in some other context? Or is it that ES and EC and bounded to make only sense in this type of toy reinforcement learning task where the network size is small and the batch size can be super large?* **[Response]** Thanks for posing this thought-provoking question. Firstly, humanoid locomotion is a challenging task in reinforcement learning (RL), as it has high dimensional observation and action space (Duan et al., 2016), and poses a challenge to deep RL algorithms (Haarnoja et al., 2018). Extending beyond locomotion tasks, we believe that the EC framework has the potential to be useful in other contexts, such as image classification (Xingwen Zhang et al., 2017) and game playing (Salimans et al., 2017), where evolutionary algorithms have demonstrated success. Secondly, under large batch size conditions, our EC is much more efficient than ES and SGD. On the one hand, our EC can be efficiently distributed over multiple devices by sending random seeds, while SGD requires computing an average of the gradient of all batches incurring significant communication costs. On the other hand, the 1-bit connections in the EC framework may reduce memory and computation costs to 1/32 compared to the FP32 commonly used by both ES and SGD. Moreover, EC requires only a forward pass, while SGD needs both a forward and backward pass. Namely, the 10,240 batch size in EC is analogous to $\frac{10240}{2 \times 32} = 160$ in SGD in memory usage, which is comparable to commonly-used SGD batch sizes such as 128 and 256. In summary, we believe that the EC framework holds promise for a broader range of applications and is capable of training larger networks with larger batches. We will explore different tasks and the efficiency of the EC framework in future work. --- Rebuttal Comment 1.1: Title: Thank you Comment: Thank you for clarifying things and running additional simulations. I find indeed that the comparison between PPO-RNN and EC-RNN are compelling. These are great additions for the paper, the choice of dampening factor 0.8 is not necessarily standard (I had seen 0.3 for initialization $1/\sqrt{n}$) but the results in the non spiking RNN suggest that the stability of the gradient estimator would not explain the failure of PPO, so I am now convinced. I had forgotten one question, do you subtract the mean or any baseline to the return R in equation (14). I believe this should yield a gradient estimator with lower variance? If not, do you know why is it not necessary? --- Reply to Comment 1.1.1: Comment: Thanks for your professional question. Alternatively, we adopted a similar fitness shaping trick, center rank transform, as discussed in ES (Tim Salimans et al., 2017) and NES (Daan Wierstra et al., 2014). The transform sort the return of a population, then rescale the sorting indexes into a fixed interval $[-0.5, 0.5]$. Evidently, it yields an outcome with zero mean and fixed variance. Moreover, it reduces the influence of the outlier individual and helps with stability. We will include the details in our revised paper. Additionally, during the research process, we tried several other fitness shaping methods on population return, including no rescaling, subtraction of mean, and rescaling to fixed interval $[-0.5, 0.5]$. We found out that these methods can train EC, but resulting in a lower final return and often come with stability issues.
Summary: The authors present a new evolutionary algorithm, evolving connectivity (EC) to train recurrent spiking neural networks (RSNN). The algorithm alleviates the gradient estimation problem of surrogate gradient-based (SG) training methods which are often hard to implement in neuromorphic hardware. Further, by focusing the algorithm only on evolving network connectivity instead of the weight magnitude, EC can reduce all network weights down to 1bit {0,1} alternatives making it further hardware friendly. The paper also highlights the superior accuracy and performance of their training algorithm for a few robotics tasks involving sequential decision making against state-of-the-art Surrogate Gradient and Evolution Strategies based algorithms. Strengths: The authors present a strong argument for developing RSNNs which focus more on the connectivity probability between layers instead of selecting specific weight magnitudes. This enables the network to implicitly fit many samples without overfitting to any specific subset of data. The paper explains the EC algorithm clearly and highlights justifications for its superiority in performance and accuracy compared to other algorithms (ES/SG) for targeted robotics benchmarks. Removing the gradient estimation requirement and reducing the weight matrix to 1-bit are significantly useful optimizations that make the algorithm hardware friendly. Weaknesses: The paper doesn’t clearly describe the RSNN architecture that is adopted for the experiments discussed and how it compares in terms of number of neurons/connections to the Deep RNNs that are used as baseline for performance/accuracy. This makes the comparisons to the ES-RNN results difficult to understand for me. Further, comparing the RSNN based experiments only (ES/EC/SG), it appears that only EC algorithm is 1-bit owing to the focus on connectivity only. But this makes me wonder if 1-bit networks applying ES/SG algorithms could compete against EC approach in terms of performance. I understand that by the authors’ assertion that connectivity is more critical than assigning wide ranging weights to a fixed set of connections. However, for experiment’s sake I am curious to understand that if ES/SG algorithms also had the HW friendly assumption of only creating a binary SNN then would their accuracy/performance improve or deteriorate? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Describe the RSNN architecture and how it compares to the RNNs used in Experiments in terms of number of layers/params/model size/precision? 2. Experiment with 1-bit RSNN utilizing ES/SG algorithms for training to understand the impact on accuracy and performance? 3. The combination of ES and RSNN performs notably well on all the benchmarks tested, but it would be good to understand what part of it comes from the network architecture (try another architecture), what part comes from bit precision and lastly what is contributed by the training algorithm? This kind of study will help strengthen the credibility of EC algorithm for other benchmarks and network architectures. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors provide a useful discussion on the memory footprint of the different training algorithms discussed. Please address the concerns related to network architecture and precision impacting accuracy and performance as discussed in weakness section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[Q1]** *Describe the RSNN architecture and how it compares to the RNNs used in Experiments in terms of number of layers/params/model size/precision?* **[Response]** Thanks for your valuable question. The table below outlines the similarities and differences between RSNN and baseline models in terms of the number of layers, hidden size, number of parameters, precision, and model size. To ensure a fair comparison, our RSNN (EC) architecture has a similar input-hidden-output structure and number of parameters to the RNNs used in the experiments. The primary difference lies in the precision, with RSNN (EC) utilizing 1-bit precision, while other baseline models employ FP32 precision. Consequently, RSNN (EC) has a smaller model size. In our final version, we will include this table in Section 6.1 and further elaborate on the architecture information. *Table 1: Comparison of Model Architectures* | Model | Hidden size | # Hidden Layers | # Params | Precision | Size (KB) | |---------------|-------------|-----------------|----------|-----------|-----------| | RSNN (EC) | 256 | 1 | 193K | 1-bit | 24 | | RSNN (Others) | 256 | 1 | 193K | FP32 | 768 | | GRU | 256 | 1 | 386K | FP32 | 1544 | | LSTM | 128 | 1 | 191K | FP32 | 764 | ---- **[Q2]** *Experiment with 1-bit RSNN utilizing ES/SG algorithms for training to understand the impact on accuracy and performance?* **[Response]** Thanks for your advice. Despite ES and SG being continuous optimization methods built for continuous parameters, we attempted to discretize ES and SG for training 1-bit connection RSNNs proposed in our paper. Specifically, we adopt ES and SG to optimize a continuous parameter $\theta$ and discretize them to 1-bit weights $\textbf{W}$. We use threshold at 0 as $\textbf{W}=H({\theta})$, where $H$ is the Heaviside step function. For SG, we additionally used the straight-through estimator as a common practice (Bengio et al. 2013). Please note that these approximations and discretization may result in biased gradients for the 1-bit discrete optimization. The results in Table 2 show that ES-RSNN (1-bit) and SG-RSNN (1-bit) exhibit learning progress but still remain a significant gap to EC-RSNN (1-bit). Besides, ES and SG performed better on continuous FP32 weights compared to discrete 1-bit. This suggests that continuous optimization methods excel with continuous parameters. For 1-bit discrete connections, EC should be adopted as it is specifically designed for discrete 1-bit optimization and provides unbiased gradients. Detailed results can be found in Fig. S3 in the supplementary PDF. We will add these experiment results in Section 6. *Table 2: Comparison of Model Architectures* | Algorithm | Precision | Return | |-----------|-----------|--------| | EC-RSNN | 1-bit | 13808 | | ES-RSNN | 1-bit | 10240 | | SG-RSNN | 1-bit | 6067 | | ES-RSNN | FP32 | 11264 | | SG-RSNN | FP32 | 11505 | ---- **[Q3]** *Network architecture and precision impact accuracy and performance.* **[Response]** Thanks for your suggestion. To validate the EC on more network architectures, we conducted experiments using a vanilla RNN trained with EC, meanwhile using ES and PPO as a baseline. The results can be seen in Table 3 and Figure S2 in the supplementary PDF. It demonstrates that EC can effectively train 1-bit deep recurrent neural networks and has the potential for different architectures and quantized neural networks. *Table 3: Vanilla RNN* | Algorithm | Precision | Return | |-----------|-----------|--------| | EC-RNN | 1-bit | 12455 | | ES-RNN | FP32 | 11042 | | PPO-RNN | FP32 | 11133 | Regarding the impact of precision, Table 2 highlights that ES and SG, not specifically designed for 1-bit precision, encounter optimization challenges, which negatively affect performance. In contrast, EC, designed to optimize 1-bit connections, is not hindered by these challenges and outperforms the RSNN baselines. This evidence supports the robustness and adaptability of the EC approach across varying network architectures and precision levels. --- Rebuttal Comment 1.1: Title: Response to Author Rebuttal Comment: Thanks to the authors for providing detailed feedback on the questions discussed in the review. A few comments on the response: 1. The breakdown of network parameters and memory footprint is very helpful. 2. I appreciate the effort to try out the suggested experiment, I do see a significant degradation in the performance of SG baseline on moving to the 1-bit precision. While the degradation for ES baseline is smaller, its performance is still lagging behind the proposed EC. Also this seems to highlight the difference in the optimization approach (continuous variable vs connectivity) and their performance for the same task. 3. Very useful data on Vanilla RNN training as well, thanks for generating it. In conclusion, I am satisfied with the authors responses and will increase my score by one point. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful feedback. We will take your valuable suggestions into consideration and refine our paper accordingly.
Rebuttal 1: Rebuttal: Thanks for all your suggestions. In light of them, we gathered commonly asked questions and ran a comprehensive set of additional experiments, as detailed below. The result figures are shown in the supplementary PDF. All our additional experiments are configured using the same settings and hyperparameters as in our paper and Appendix A, tested on the most complex Humanoid task (17-DoF) in Brax (Freeman et al., 2021), and averaged over 3 independent random seeds, with standard deviation shown as shaded areas. 1. **EC vs. NES?** The primary distinction is discrete vs. continuous, as demonstrated in Fig. S1. Our work presents a novel approach to the NES framework by utilizing a 1-bit discrete search space formulation, which distinguishes it from the original NES paper (xNES, SNES, Wierstra et al., 2011) and its derivatives like ES (Salimans et al., 2017). These prior works employed continuous search spaces parameterized by normal distributions to train real-valued parameters in neural networks, as seen in conventional deep learning. 2. **Can EC work on other 1-bit networks?** To verify EC's capabilities for training deep RNNs, we conducted experiments using a vanilla RNN trained with EC, meanwhile trained with ES and PPO as baselines. The RNN has 256 tanh units in the hidden layer. For 1-bit EC, the weight magnitudes are 0-1 connection matrix multiplied by LeCun initialization standard deviation, and the weight signs are determined by $+$ for excitatory, $-$ for inhibitory, with the first 128 neurons as excitatory and last 128 as inhibitory. For ES and PPO as baselines, we use real-valued weights with LeCun normal initialization. The results in Fig. S2 demonstrate that EC can effectively train 1-bit deep recurrent neural networks and has the potential for different architectures and quantized neural networks. 3. **Can ES or SGD/SG work on 1-bit RSNN?** Despite that ES and SG are continuous optimization methods built for continuous parameters, we attempted to discretize ES and SG for training 1-bit connection RSNNs proposed in our paper. Specifically, we adopted ES and SG to optimize a continuous parameter $\theta$ and discretize them to 1-bit weights $\textbf{W}$ by thresholding at 0 as $\textbf{W}=H(\theta)$, where $H$ is the Heaviside step function. For SG, we used the straight-through estimator as a common practice (Bengio et al., 2013). Please note that these approximations and discretization may result in biased gradients for the 1-bit discrete optimization. Fig. S3 (a) shows that ES-RSNN (1bit) and SG-RSNN (1bit) exhibited learning progress but were outperformed by EC-RSNN (1bit). Figs. S3 (b) and (c) indicate that ES and SG performed better on continuous FP32 weights compared to discrete 1-bit. This suggests that continuous optimization methods excel with continuous parameters, and for 1-bit discrete connections, EC should be adopted as it is specifically designed for discrete 1-bit optimization and provides unbiased gradients. 4. **Is PPO correctly implemented?** To ensure our PPO implementation's accuracy, we trained PPO-LSTM and PPO-GRU and compared with EC-RSNN (Figure S4). The Brax maintainers reported a PPO return of approximately 11,300. Our PPO implementation yielded higher results, with 12,960 (PPO-LSTM) and 14,312 (PPO-GRU) average final return, confirming its correctness and performance. Notably, EC-RSNN achieved a higher final return than PPO-LSTM, demonstrating competitive performance with deep reinforcement learning. 5. **Is SG correctly implemented?** We conducted additional experiments using different dampening factors to thoroughly test our surrogate gradient baseline, as demonstrated in Figure S5. With a dampening factor of $\gamma=0.8$, our experiment showed a similar maximum performance to the SG-RSNN baseline from our paper. In conclusion, our additional experiments reinforce our findings that EC is an effective novel training framework for RSNNs and that our baselines are robustly constructed. Pdf: /pdf/140cc42fe1bc70e426834bb24efd23c7fada6ec9.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Mutual Information Regularized Offline Reinforcement Learning
Accept (poster)
Summary: A long withstanding problem in offline RL is the distribution shift issue, i.e., the query of the action values for out-of-distribution state-action pairs. This paper proposes to consider the mutual information (MI) between states and actions. Specifically, the authors view state and action as two random variables, and constrains the policy improvement direction by the state-action MI. For the practical implementation, the authors introduce the MISA lower bound of state-action pairs and adopt MCMC techniques to construct an unbiased gradient estimation for the proposed MISA lower bound. The authors also unify TD3+BC and CQL under the proposed MISA framework. Empirically, the proposed method performs well on several datasets from the D4RL benchmark. Strengths: 1. The paper is clear and easy-to-follow in the derivation of the MISA lower bounds. 2. The practical implementation and hyperparameter choice are clearly discussed. 3. The discussion on the connection with TD3+BC and CQL is informative. Weaknesses: 1. The discussion is generally vague on why the proposed regularization method MISA is better than the prior approaches. The authors do discuss it in the paper, such as in the paragraph titled "Intuitive Explanation on the Mutual Information Regularizer". Those discussions are, however, general subjective and hard-to-follow. Some examples of the vague statements includes: * "directly fitting the policy on the dataset is short-sighted" (what is "short-sighted" and why?), * "optimization direction" (is it the gradient?), * "make sure in-distribution data have relatively higher value estimation" (not sure where does this statement come from). In Section 4.4, the author also mention that the propose method can "give a better mutual information estimation" (and thus better than CQL). But why does a better estimation of MI lead to better policy performance? 2. Experimental results: Both Table 1 and Table 2 (main table) do not contain the error bar. This make it hard to judge the significance of the improvement over prior methods. 3. The proposed method is highly-related to well-established methods in bounding/estimating the MI and KL (e.g., the $f$-divergence and DV representation of KL). The novelty of the proposed method is therefore less shining, but obviously is not a major weakness. 4. The paper may need re-organization so that the algorithm box for the main algorithm (Algo. 1) can show up in the main paper. For example, is MISA-$f$ really needed since in Eqn. (10) MISA is based on the DV representation? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. [L42-43] "the improved policy is unconstrained and might still deviate from the data distribution." I don't quite understand this sentence. I think both "forcing the learned policy to stay close to the behavior policy" and "generating low value estimations for OOD actions" constraint the policy improvement, towards either the behavior policy or the more-likely state-actions in the dataset. Besides, deviating from the data distribution may not be bad in offline RL, especially when the behavior policy is sub-optimal. Am I missing something? 2. [L43-44] I don't quite understand the statement "directly constrain the policy improvement direction to lie in the data manifold?" Maybe it is an wording issue, but I don't understand the meaning of a "improvement direction lie in a manifold." 3. What is a "data manifold"? This phase appears many times in the paper and is important to the main contribution of this paper, but I don't see a clear definition of it? Are you referring it to the support of the behavior policy's stationary state-action distribution? 4. [L148-149] "it is natural to learn a policy that can recover the dependence between states and actions produced by the behavior agent." While this statement is totally correct, wouldn't this approach the same as prior work that regularize the learning towards the behavior policy? Furthermore, can you elaborate more on why "By regularizing the agent with $I(S;A)$ estimation, we ... avoid being over-conservative and make sufficient use of the dataset information"? 5. [L187-188] Can you elaborate more on why you can use the Q-network $Q_\phi(s,a)$ as the discriminator $T_\phi(s,a)$ in Eqn. (10)? AFAIK, in the DV representation, $T_\phi$ should be selected over a sufficiently large function class. Why you can choose this function class to the the set of functions satisfying the Bellman equation (by the $J_Q^B(\phi)$ term in Eqn. (12))? Will using a separate neural network for the discriminator $T_\phi(s,a)$, such as in [1], lead to a better or worse performance? And, if worse, why? 6. Is it computationally demanding to use MCMC methods to sample from $p_{\theta, \phi}(a|s)$? How do you choose the hyperparameters in the MCMC method? How does the running time or compute of the proposed method compared with the prior work? [1] Yang, Shentao, et al. "A Unified Framework for Alternating Offline Model Training and Policy Learning." arXiv preprint arXiv:2210.05922 (2022). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: I do not find a discussion of the limitations in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed and valuable feedback and suggestions for further improvements. We would like to address the concerns as follows. > why does a better estimation of MI lead to better policy performance? Intuitively, estimating the mutual information $I(S; A)$ of the dataset encourages the RL policy to correctly utilise the mutual dependence of states and actions as described in the dataset, while performing its policy improvement and evaluation. A tighter mutual information bound guarantees a more accurate estimation of the mutual dependence of states and actions, thus allows a better conservative policy optimisation. We empirically validate this claim in Table 1 and Section 5.2. > why the proposed regularization method MISA is better than the prior approaches As Eqn. 14 suggests, the MISA lower bound can be interpreted maximising the log-likelihood of a non-parametric policy $\pi^*_{\theta, \phi} = E{\left[ \log{\pi_\theta(a\mid s) e^{Q_\phi(s, a)}} - {\log E_{a’\sim \pi_\theta(a’\mid s)} \left[e^{Q_\phi(s, a’)}\right]} \right]}$. Given the current estimated $\phi$ and $\theta$, $\pi^*(a\mid s)$ is a closed-form optimal policy to produce a higher return, as discussed in MPO [1] and AWR [2]. On the contrary, other bounds, e.g., BA with $\arg\max_\theta E_{s, a\sim {D}}\left[ \log \pi\theta(a\mid s) \right]$ that maximises the **log-likelihood of the current policy on the dataset**. This constraint solely considers the present policy without instating any regularization on the enhanced policy $\pi^*_{\theta, \phi}(a\mid s)$. As a result, it is **short-sighted**. MISA allows the agent to explore more space as long as the improved policy is in-distribution. We will improve our clarity in our future revisions. > Both Table 1 and Table 2 (main table) do not contain the error bar. We apologise for the mistake. We retrieved the standard deviations from our past logs and reported only the main results in the attached PDF in our general response due to the space and time limit. We find MISA is generally stable across the domains. We will fix both Table 1 and 2 in our revisions. > is MISA-f really needed since in Eqn. (10) MISA is based on the DV representation? We introduce MISA-$f$ as a variant under our general mutual information regularised offline RL formulation. It works as empirical evidence to support our claim that a tighter mutual information bound leads to better performance in Section 5.2 and Table 1. This observation could potentially facilitate future research in exploring other mutual information bound variants. > What is a "data manifold"? … Are you referring it to the support of the behavior policy's stationary state-action distribution? Yes, indeed, we are referring to it. We apologize for any confusion and will rectify this in the final version. > why "By regularizing the agent with $I(S; A)$ estimation, we ... avoid being over-conservative and make sufficient use of the dataset information"? As discussed in the first question, different from regularising the learning policy by maximising the log-likelihood of current policy $\log \pi_\theta (a\mid s)$, MISA imposes the constraint by maximising the log-likelihood of an one-step improved policy $\pi^*_{\theta, \phi} = E_{s, a\sim D}\left[\log \pi_\theta(a\mid s)e^{Q_\phi(s, a)} - \log E_{\pi_\theta (a'\mid s)}\left[e^{Q_\phi(s, a')}\right]\right]$. This allows the agent to better explore the state-action space during policy optimisation, as long as the improved policy is supported by the behaviour policy. We will further improve our presentation in the final manuscript. > Can you elaborate more on why you can use the Q-network $Q_\phi(s, a)$ as the discriminator $T_\phi(s, a)$ in Eqn. (10)? We thank the reviewer for the valuable question. We choose this formulation because it also helps to directly impose the constraint onto the Q function, rather than an implicit constraint over $T_\phi(s, a)$. We conducted additional experiments to demonstrate the performance of training $T_\psi(s, a)$ in lieu of $Q_\phi(s, a)$. The results are available in the supplementary PDF attached in our general response. In short, we observed that employing $T_\psi(s, a)$ destabilizes training and yields worse overall performance. These findings will be incorporated into our future revisions. > Is it computationally demanding to use MCMC methods to sample from $p_{\theta, \phi}(a\mid s)$? How do you choose the hyperparameters in the MCMC method? How does the running time or compute of the proposed method compared with the prior work? We agree that MCMC might slow down the training. Thus, as discussed in the appendix, we chose a set of parameters to minimise the overhead of MCMC: burn-in step of 5, step size of 1, and leapfrog steps of 2. This ends up with a less accurate distribution but slightly faster sampling, and empirically, we still find it improves the final performance as discussed in Table 1, compared with MISA-biased. We benchmarked the training speed of MISA with and without MCMC. | | MISA-no-MCMC | MISA | |:---------------------:|:------------:|:----:| | Iterations Per Second | 81.3 | 25.6 | We can see that MCMC indeed slows down MISA. However, the MCMC sampling steps happen during training. During testing, only the policy will be used, which is as fast as any other model-free policy methods. Thus, MCMC wouldn’t affect the deployment. If one wants to improve the training speed for fast prototyping, MISA-biased is fast and also outperforms baselines. We will also discuss this issue in the limitations. References: [1] Abdolmaleki, A., Springenberg, J. T., Tassa, Y., Munos, R., Heess, N., & Riedmiller, M. (2018). Maximum a posteriori policy optimisation. arXiv preprint arXiv:1806.06920. [2] Peng, X. B., Kumar, A., Zhang, G., & Levine, S. (2019). Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. arXiv preprint arXiv:1910.00177. --- Rebuttal Comment 1.1: Title: Response to authors Comment: Dear authors, Thank you so much for the responses and additional experiments. Both of them are helpful to clarify my concerns. I will increase my rating from 4 to 6. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, We are pleased that our responses have effectively addressed your concerns. We extend our gratitude once more for your insightful comments and valuable suggestions. We will certainly integrate these inputs into our revisions. Warm regards, Authors
Summary: The authors propose a new offline RL method called MISA. Similar to prior work in offline RL, MISA constrains the learned policy to lie within the offline data manifold and does so by maximizing a lower bound on the mutual information between states and actions in the dataset. The authors consider three different bounds on lower information and generally find that a tighter bound leads to better performance. The authors connect their work with prior methods in offline RL. They evaluate MISA on a wide variety of different environments and find that MISA generally achieves superior performance compared to prior methods. Strengths: - The paper is well-motivated and theoretically sound, deriving three different lower bounds on mutual information between states and actions. - The authors connect their work with prior work in offline RL (TD3+BC, CQL). - The experimental evaluation is very thorough. The authors run experiments on a large number of environments and compare to a large number of baselines. The authors also conduct informative ablation studies on factors such as choice of mutual information lower bound, biased vs. unbiased gradient estimation, and number of Monte-Carlo samples. Weaknesses: - To support the authors' claim in line 272 that tighter mutual information bounds lead to better performance, it would be nice to show a plot of numerical values of the different mutual information estimates (Ba, MISA-$f$, MISA-DV, MISA) to see if the bounds in line 274 hold in practice. - Are the values of $\gamma_1$ and $\gamma_2$ in Equations (12) and (13) specified anywhere? Are these hyperparameters that need to be tuned for each environment? Or is the proposed method robust to choice of $\gamma_1$ and $\gamma_2$. I'm curious how the authors trade off maximizing the RL objective and the Mutual Information objective, and how different choices of $\gamma_1$ and $\gamma_2$ affect performance. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - I may have missed this, but are the values of $\gamma_1$ and $\gamma_2$ in Equations (12) and (13) specified anywhere? Are these hyperparameters that need to be tuned for each environment? Or is the proposed method robust to choice of $\gamma_1$ and $\gamma_2$. I'm curious how the authors trade off maximizing the RL objective and the Mutual Information objective, and how different choices of $\gamma_1$ and $\gamma_2$ affect performance. - Have the authors experimented with online finetuning after using MISA? It would be nice (although not a huge deal) if the authors could include results for online finetuning after offline RL using MISA, similar to what's done in Section 5.3 of the IQL paper [1]. I'd be interested to see if MISA leads to better online finetuning. - In Table 1, BA often performs better than MISA-$f$ (e.g. halfcheetah-medium-v2, halfcheetah-medium-replay-v2, hopper-medium-replay-v2, walker2d-medium-replay-v2) and sometimes even performs better than MISA (e.g. halfcheetah-medium-v2, halfcheetah-medium-replay-v2) even though BA $\leq$ MISA-$f$ $\leq$ MISA. This would seem to contradict the authors' claim in line 272 that tighter mutual information bounds lead to increased performance. Can the authors provide an explanation as to why this might be happening? [1] Offline Reinforcement Learning with Implicit Q-Learning (Kostrikov et al.) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for recognising the novelty and contribution of our work. We would like to address the concerns below. > it would be nice to show a plot of numerical values of the different mutual information estimates (BA, MISA-$f$, MISA-DV, MISA) to see if the bounds in line 274 hold in practice. We thank the reviewer for the valuable question. Unfortunately, as our bounds are optimized together with the RL objectives, and both policy improvement and policy evaluation steps would affect the final bound. We would further clarify this in our manuscript. > Are the values of $\gamma_1$ and $\gamma_2$ in Equations (12) and (13) specified anywhere? Are these hyperparameters that need to be tuned for each environment? We apologise that we forgot to present the values for these two hyparameters in our manuscript. Across all our experiments, we consistently employed $\gamma_1 = 0.5$ and $\gamma_2 = 5$. We found that these values generally yield favorable results across various tasks and did not perform per-task fine-tuning of these parameters. > I'm curious how the authors trade off maximizing the RL objective and the Mutual Information objective, and how different choices of $\gamma_1$ and $\gamma_2$ affect performance. In fact, we have conducted experiments using different combinations of $\gamma_1$ and $\gamma_2$,and noted that $\gamma_1 \in {0.5, 1}$, and $\gamma_2 \in {3, 5}$ typically yield favorable outcomes. We use $\gamma_1 = 0.5$ and $\gamma_2 = 5$ across all domains after testing them on MuJoCo tasks. For $\gamma_1$, we consider it as a strong regularisation objective as it works directly on the policy, while $\gamma_2$ controls the Q functions, which have an indirect effect over the policy. This might explain why a smaller $\gamma_1$ is better and a larger value is preferred for $\gamma_2$. > Have the authors experimented with online finetuning after using MISA? We haven’t done the experiments yet. Given the limited time during the rebuttal period, we apologise that we might not be able to implement the online training pipeline and rerun the experiments. Nevertheless, we remain confident that MISA's performance would likely improve further with online finetuning. We consider this avenue for future research. > In Table 1, BA often performs better than MISA-$f$ (e.g. halfcheetah-medium-v2, halfcheetah-medium-replay-v2…) We thank the reviewer for this valuable question. We agree that BA in fact outperforms MISA-f on some MuJoCo medium-replay tasks. One hypothesis for this phenomenon is that the medium-replay tasks are constructed by down-sampling the training replay buffer of a medium-level agent, which covers the full exploration process of an online learning agent. Such a data distribution is suitable for an under-constrained agent to learn a decent policy and acquire a better Q function. In comparison, MISA-f, MISA-DV, and MISA are more constrained, learning a lower bound of the true Q function, and as a result, they perform slightly worse than BA. Nevertheless, as BA is under constrained and we chose SAC as our base RL algorithm (which contains an entropy term to encourage the exploration), it would frequently query the out-of-distribution actions and suffer from the Q-value overestimation issue given the data distributions in medium and medium-expert tasks. Thus, BA still achieves a worse overall performance than MISA-f and MISA-DV. We would further clarify this in our future revisions. We hope our answers address your concerns. Please kindly let us know if you have further questions. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my questions! I will keep my score as is.
Summary: This paper integrates two distinct methods in the offline RL domain: the KL regularized method and the conservative Q-learning method. It achieves this by incorporating mutual information in both the value loss function and the policy loss function. To accurately approximate the mutual information between states and actions in the offline dataset, this paper employs the Donsker-Varahdan representation. The experimental results demonstrate the effectiveness of the proposed MISA algorithm and highlight how accurate approximation of mutual information can significantly enhance performance. Strengths: 1. This paper introduces a significant novelty by combining two distinct offline RL algorithms, namely the KL regularized method and conservative Q-learning method. This is achieved by introducing mutual information regularization terms in both the value loss function and policy loss function. 2. The experimental results demonstrate that the proposed MISA algorithm significantly improves performance across various environments. Weaknesses: 1. performance in offline settings, but this may not always be the case. For instance, as derived in this paper, BA is a lower bound for MISA-f and MISA-DV. However, the results presented in Figure 1 do not support this statement, as BA outperforms MISA-f and MISA-DV in most cases within the MuJoCo medium-replay environments. 2. To achieve a better approximation of the mutual information, it is crucial to find $T_\psi (s,a)$ that maximizes the right-hand side of the Donsker-Varadhan representation (as outlined in Lemma 3.2). Providing results using such optimized $T_\psi (s,a)$ would strengthen the validity of this statement. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. [Regarding Weakness 1] Could you please provide an explanation of the results of BA, MISA-f, and MISA-DV in the MuJoCo medium-replay environments? 2. [Regarding Weakness 2] While utilizing $Q_{\phi}(s,a)$ instead of $T_{\psi} (s,a)$ offers benefits in combining the two methods, as you mentioned, achieving a better approximation of the mutual information enhances performance. Could you kindly present the results of optimizing $T_{\psi} (s,a)$ to demonstrate this? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: All limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for recognising the novelty and contribution of our paper, and giving us valuable suggestions. We would like to clarify the confusion below. > BA outperforms MISA-f and MISA-DV in most cases within the MuJoCo medium-replay environments. We extend our thanks to the reviewer for this insightful question. We agree that BA in fact outperforms both MISA-f and MISA-DV on some MuJoCo medium-replay tasks. A possible explanation for this phenomenon is that the medium-replay tasks are constructed by down-sampling the training replay buffer of a medium-level agent. This buffer covers the comprehensive exploration process of an online learning agent. Consequently, such a data distribution is well-suited for an under-constrained agent to acquire a decent policy and improve its Q function. Conversely, MISA-f, MISA-DV, and MISA exhibit greater constraints, learning a lower bound of the true Q function. Consequently, their performance might slightly lag behind that of BA. However, considering BA's under-constrained nature and our choice of SAC as the underlying RL algorithm (which includes an entropy term to promote exploration), BA tends to query out-of-distribution state-action pairs, resulting in Q-value overestimation issues on the medium and medium-expert tasks. Thus, BA achieves an overall performance significantly worse than that of MISA-f and MISA-DV. We would further clarify this in our updated manuscript. > Could you kindly present the results of optimising $T_\psi(s, a)$ to demonstrate this? We thank you for the valuable question. We run an additional set of experiments on the MuJoCo tasks and present the results in the additional pdf attached in our general response. Theoretically, directly optimising $T_\psi(s, a)$ removes the direct Q value constraint as in Eqn. 12. Its impact on policy will be manifest through the unbiased gradient estimate outlined in Eqn. 15. Nonetheless, our observations reveal that training $T_\psi(s, a)$ leads to decreased stability, causing the agent to falter in many scenarios. Hence, we conclude that the utilization of $Q_\phi(s, a)$ constitutes a superior design choice, and directly regulating Q functions remains pivotal for MISA's stability. We will integrate these outcomes and discussions into our future revisions. We hope this response would clarify your concerns. Please kindly let us know if you have further questions. --- Rebuttal Comment 1.1: Comment: Thanks for the authors’ kind response. I now understand the reason behind BA occasionally outperforming MISA-f, MISA-DV, and MISA, as well as the reason for the lower performance of the method using the maximum $T\_\psi(s,a)$ compared to the proposed method. Consequently, I will raise my score to 6. --- Reply to Comment 1.1.1: Comment: We thank you for your valuable suggestions and insightful discussions! We will incorporate them into our revisions accordingly. Warm regards, Authors
Summary: The authors of this paper introduce a novel framework called MISA, which aims to optimize the lower bound of mutual information between states and actions in the dataset to direct the policy improvement. They provide a theoretical explanation for MISA's superior performance over CQL and empirically demonstrate that MISA attains state-of-the-art results on D4RL when compared to different baselines. Strengths: - The motivation behind this study is logical and sound. - MISA successfully integrates TD3+BC and CQL, and subsequently deduces an improved variant from a theoretical standpoint. - The experiments conducted within this study are extensive and thorough, and MISA achieves SOTA on D4RL. Weaknesses: - It appears that there is a confusion between the true Q function, denoted as $Q$, and the estimated Q function, represented as $\hat{Q}$, in the theoretical derivation provided by the authors. This confusion is evident in equations 5 and 6, where the update rules should have used $\hat{Q}$ instead of $Q$ (as correctly utilized in the CQL paper). Additionally, the Q function should be $\hat{Q}$ in Section 4.3. Therefore, the term $\pi^{*}_{\theta,\phi}\propto \pi_\theta (a|s)e^{Q_\phi(s,a)}$ in Line 199 doesn't hold true since $Q_\phi(s,a)$ should be $\hat{Q}_\phi(s,a )$. This implies that the "Explanation on the Mutual Information Regularizer" is incorrect. I may not have spotted all errors due to time constraints. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - As mentioned in the Weaknesses section, the potential error in the theoretical justification raises concerns. I might consider increasing my rating if this issue is addressed effectively. - In Lines 41-43 of the introduction, the authors state that "though these methods are effective at alleviating the distributional shift problem of the learning policy, the improved policy is unconstrained and might still deviate from the data distribution". Could the authors elaborate on the term "unconstrained"? To my understanding, MISA presents a framework that unifies CQL and TD3+BC and formulates a tighter constraint, which doesn't essentially diverge from the policy constraint of prior methods. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our gratitude to the reviewer for acknowledging the novelty and contributions of our work, as well as for providing us with valuable suggestions. We would like to address and provide clarification on the questions as follows. > It appears that there is a confusion between the true $Q$ function, denoted as $Q$, and the estimated $Q$ function, represented as $\hat{Q}$, in the theoretical derivation provided by the authors. This confusion is evident in equations 5 and 6, where the update rules should have used $\hat{Q}$ instead of $Q$ (as correctly utilized in the CQL paper). We extend our appreciation to the reviewer for this insightful question. In our formulation and all derivations, we utilize $Q_\phi$, which denotes an estimated Q function parameterized by $\phi$. This is actually equivalent to $\hat{Q}$ you referred to. We apologize for any confusion caused and will enhance the clarity of our presentation in the revised manuscript. > the term $\pi^*_{\theta, \phi}\propto \pi_\theta(a\mid s)e^{Q_\phi(s, a)}$ in Line 199 doesn't hold true since $Q_\phi(s, a)$ should be $\hat{Q}_{\phi}(s, a)$. This implies that the "Explanation on the Mutual Information Regularizer" is incorrect. As discussed in the prior question, $Q_\phi(s, a)$ already represents an estimated value, thereby not affecting the equation's correctness. When we refer to $\pi^*{\theta, \phi}$, it is derived as a non-parametric closed-form solution for the optimal policy, given the **current estimated $Q\phi(s, a)$**. This draws inspiration from MPO [1] and AWR [2]. By maximizing Eqn. 14, MISA essentially maximizes the log-likelihood of a one-step improved policy considering the dataset. For further details, please consult Eqn. 8 in the MPO paper. We will expound on this clarification in our updated manuscript. > … "though these methods are effective at alleviating the distributional shift problem of the learning policy, the improved policy is unconstrained and might still deviate from the data distribution". Could the authors elaborate on the term "unconstrained"? We express our apologies for the lack of clarity. As discussed in the preceding question, MISA aims to maximize the log-likelihood of an improved policy in view of the dataset. Consider alternative bounds, such as the Barber-Agakov bound employed in TD3+BC. The behavior cloning term $\arg\max_\theta E_{s, a\sim D}[\log \pi_\theta(a\mid s)]$ ensures that the current policy remains proximate to the dataset. However, after one-step gradient descent, the improved policy is not necessarily constrained to be close to the dataset, resulting in an "unconstrained" scenario. In this context, MISA could be viewed as a more cautious and better-constrained objective, which guarantees that the improved policy is close to the dataset. This clarification will be detailed further in our revised manuscript. We hope our answers address your concerns. Please kindly let us know if you have further questions. References: [1] Abdolmaleki, A., Springenberg, J. T., Tassa, Y., Munos, R., Heess, N., & Riedmiller, M. (2018). Maximum a posteriori policy optimisation. arXiv preprint arXiv:1806.06920. [2] Peng, X. B., Kumar, A., Zhang, G., & Levine, S. (2019). Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. arXiv preprint arXiv:1910.00177. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, We sincerely value your questions and appreciate your suggestions for further improvement. Please do not hesitate to let us know if you have any further questions for us to clarify. We eagerly look forward to the opportunity for continued discussions. Warm regards, Authors --- Rebuttal Comment 1.2: Title: Rebuttal response Comment: I appreciate the authors' detailed response, which addresses some of my concerns. However, in relation to the expression $\pi^*_{\theta,\phi}\propto \pi_\theta (a|s)e^{Q_\phi(s,a)}$ in line L199, according to the authors' response, the correct representation is: $\pi^*_{\theta,\phi} \propto \pi_\theta (a|s)e^{\hat{Q}_\phi(s,a)}$ The authors claim this is in line with MPO and AWR. Nevertheless, referring to Equation (3) in Section 3.1 of the AWR paper, it becomes evident that all components utilized, such as $R_{s,a}^\mu$ and $V^\mu(\mathbf{s})=\int_a \mu(\mathbf{a} \mid \mathbf{s}) \mathcal{R}_{\mathbf{s}}^{\mathbf{a}} d \mathbf{a}$, are indeed accurate and not merely estimates. Therefore, in the theoretical analysis, given the inevitable misestimation in $\hat{Q}$, arguing for a "non-parametric closed-form solution for the optimal policy" grounded in an estimated Q-function seems incongruent with the theoretical rigors provided in the AWR paper. This discrepancy requires further clarification or potential correction to maintain the consistency and integrity of the presented methodology. --- Reply to Comment 1.2.1: Comment: We sincerely appreciate the reviewer's response and the insightful discussions. We seek to provide further clarification on this matter. Primarily, as we elaborated in our previous rebuttal, we only imply that $\pi^*_{\theta, \phi}(a\mid s)\propto \pi_\theta(a\mid s)e^{Q_\phi(s, a)}$ is an optimal policy with respect to the **current estimated $Q_\phi(s, a)$**. This suggests that **$\pi^{\theta, \phi}(a\mid s)$ is not the global optimal policy** unless $Q_\phi(s, a)$ is the global optimal $Q$. The correctness of this formulation can be substantiated by referring to Equations 7 and 8 in the MPO paper. Specifically, let's consider a typical RL algorithm that involves alternating between policy evaluation and policy improvement stages. In alignment with MPO's approach, during iteration $i$, the estimated Q value is denoted as $Q_{\theta_i}(s, a)$. Subsequently, within the policy improvement phase, the objective is to improve the current policy with respect to the $Q_{\theta_i}(s, a)$ values. Such a problem is formulated as a constratined optimization problem in MPO: $$\max_q E_{\mu (s)}E_{q(a\mid s)}[Q_{\theta_i}(s, a)], \quad \mbox{s.t.}\quad E_{\mu(s)}[D_{KL}(q(a\mid s), \pi(a\mid s, \theta_i))]< \epsilon.$$ Here, we strictly follow the MPO's notation, where $\mu$ is the state distribution, $q(a\mid s)$ is a variational policy, i.e., the improved policy we aim to obtain, and $\pi(a\mid s, \theta_i)$ is the current policy. The above problem has the following closed-form solution by solving its Langrangian (refer to the Eqn. 8 of MPO): $$q_i(a\mid s)\propto \pi(a\mid s, \theta_i)e^{Q_{\theta_i}(s, a) / \eta^*}$$ where $\eta^*$ is a normalizing factor obtained by solving another convex dual function. Note that throughout the derivations of MPO, only the estimated $Q_{\theta_i}(s, a)$ is involved, rather than the global optimal Q. The optimality claim in our paper and the previous rebuttal is actually with regards to the above constrained optimization. We believe this is sufficient to support the correctness of our derivations. ---- Also, we can derive this from the constrained policy search objective, following the AWR paper suggested by the reviewer. Referring to Section 3.1 in the AWR paper, our primary aim is to identify a policy that maximizes the expected improvement $\eta(\pi) = J(\pi) - J(\mu)$, where $\mu(a\mid s)$ is a sampling distribution. Instead of expanding $\eta (\pi) = A^\mu (s, a) = R_{s, a}^\mu - V^\mu (s)$ as in the AWR paper, let’s replace $R_{s, a}^\mu - V^\mu (s)$ with a function $f (s, a)$ for simplicity. In this case, we have our objective now as $$E_{s\sim d_\pi(s)}E_{a\sim \pi (a\mid s)}[f (s, a)]$$ This equation suggests that our objective is to obtain **an optimal policy $\pi^{*}$, such that the $f(s, a)$ can be maximized under the expectation of** $E_{s\sim d_\pi(s)}E_{a\sim \pi (a\mid s)}[f (s, a)]$. Next, becaues the above objective is hard to optimize due to the dependency between $d_\pi(s)$ and $\pi$, the AWR paper suggests to solve the following constrained policy search problem (Eqn. 5 and 6 in AWR). $$\arg\max_\pi \int_s d_\mu(s)\int_a \pi(a\mid s)f(s, a)dads, \quad\mbox{s.t.}\quad \int_s d_\mu (s)D_{KL}(\pi (\cdot\mid s)\parallel \mu(\cdot\mid s)) \leq \epsilon$$ Furthermore, by solving the Lagrangian of this problem, we can derive a closed-form solution as presented in Eqn. 8 of the AWR paper: $$\pi^*(a\mid s) = \frac{1}{Z(s)}\mu(a\mid s)e^{f(s, a)/\beta}$$ The equation above indicates that we have successfully derived a policy $\pi^*(a\mid s)$ that maximizes the expected value of the function $f(s, a)$ while satisfying the constraint $\int_s d_\mu (s)D_{KL}(\pi (\cdot\mid s)\parallel \mu(\cdot\mid s)) \leq \epsilon$. However, this derivation **does not impose any specific requirements on** $f(s, a)$ **as it solely involves solving the Lagrangian of a constrained optimization problem.** Consequently, if we choose $f(s, a) = Q_\phi(s, a)$, the estimated Q value function, this choice remains valid in a sense of maximizing the expected value of $Q_\phi(s, a)$ while satisfying the constraints. ---- We thank the reviewer again for the useful discussions and we will definitely improve our presentation of this statement in our revisions to avoid any potential confusions. Please kindly let us know if you have any further questions.
Rebuttal 1: Rebuttal: We express our sincere gratitude to all the reviewers for acknowledging the novelty and contributions of our work, and for providing valuable questions for discussion along with constructive suggestions. We wish to draw your attention to the additional experiment results, as suggested by Reviewer 2yR3 and Reviewer XdYK. These encompass: - The standard deviations of our main results. - Employing MISA with the true value of $T_\psi(s, a)$ for mutual information estimation, rather than utilizing $Q_\phi(s, a)$. You can find the detailed results in the attached PDF. These results offer insights as follows: - MISA remains relatively stable across different tasks. - When utilizing the true $T_\psi(s, a)$, MISA-T exhibits strong performance on certain tasks, such as halfcheetah-medium-v2 and halfcheetah-medium-replay-v2, surpassing our original MISA. However, it does exhibit less stability and performs poorly on other tasks. Ultimately, MISA-T's performance is inferior to that of our original MISA, highlighting the significance of directly regularizing the Q functions through mutual information estimation. Nonetheless, we express our gratitude to the reviewers for their valuable suggestions, and we view this as a prospective avenue for enhancing MISA in the future. In addition to these aspects, we have addressed individual questions in our responses to the corresponding reviewers. We will incorporate all of the discussions and results into our revisions. At the same time, we will refine our presentation and clarity to address the raised concerns comprehensively. Please kindly let us know if you have further questions. Sincerely, Authors Pdf: /pdf/1a835bb325379e637cf0d036fa01de427300e2ce.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
A Diffusion-Model of Joint Interactive Navigation
Accept (poster)
Summary: The paper deals with the problem of generating vehicle trajectories at the scene level conditioned on a map and some known observations (e.g. past or future states). They propose a diffusion model based on SceneTransformer that is trained with observation conditionings with random masking to reflect multiple desired downstream applications including prediction, goal conditioning, and imputation. Experiments show reasonable results on trajectory forecasting on both Argoverse and INTERACTION datasets, and demonstrate the flexibility of the model to several tasks including generating scenes based on high-level concepts (cut-in) and scenario editing. Strengths: While no technical component of DJINN is particularly novel, the paper presents a nice comprehensive exploration of using diffusion for traffic scenario modeling. This includes all different types of masking/conditioning, classifier-free guidance, classifier guidance, and scene editing with SDEdit. Sec 3.2 is a succinct, yet informative, introduction to diffusion which will be good even for the unfamiliar reader in the driving community. In general, the paper is relatively self-contained, well-written, and easy to follow. The conditioning approach makes the model very flexible to many different kinds of observations that enable use in several important downstream applications. The paper goes beyond showing the usual motion forecasting applications (although it shows reasonable results on this too). The use of classifier guidance in Sec 6.3 to allow creating scenarios with more abstract specifications is a cool proof-of-concept that could be really useful for AV testing. Similarly for the scene editing application in Sec 6.4. Weaknesses: In some cases, it looks like conditioning the model on input observations is not strong enough to be consistent with the observations. E.g. Fig 1 bottom row for Goal Conditioning, Up-Sampling, and Imputation the generated parts of trajectories in orange do not always align well with observations in blue, i.e. there are jumps or discontinuities. It may be necessary to use test-time guidance (as in CTG [52] and [Trace and Pace, Rempe et al., CVPR 2023]) to better enforce consistency. Also, it would be good to have an evaluation to quantify this inconsistency. E.g., for up-sampling and imputation the generated portions could be compared to ground truth, or for goal conditioning how close the vehicle gets to the goal. Some details of the masking procedure to allow this conditioning were also not clear. Does the model only output predictions for timesteps which there were no observations? Or does it generate a full trajectory and only the unmasked portion is visualized in the figures? If the latter, is the masked portion (i.e. the part of the trajectory given as input) supervised during training? The classifier-free guidance (CFG) formulation was also a bit confusing. The difference between $x_{obs}$ and $x_{cond}$ in Sec 5.1 was not apparent until later in Sec 6.2. Usually CFG operates by dropping the entire conditioning from the model in the right-hand term of Eqn 6, so I’m wondering why only some of the conditioning (e.g. the goal state in Sec 6.2) is arbitrarily dropped in the proposed formulation? In this example (and in others like motion forecasting), it seems the past conditioning should also be dropped. I also think on L275-276 it may be more accurate to say that the CFG controls the emphasis of the conditioning to the model instead of controlling the “spread of agents’ trajectories”: the reason the spread reduces may be because the goal position conditioning is emphasized so more samples are guided closer to this goal. The paper and supplement currently have only a few qualitative results, making it hard to comprehensively evaluate trajectory quality. Given the range of applications that the masking enables, it would be great to show several examples from each of these applications. Videos would be ideal, but at least trajectories should be colored with a gradient over time or something rather than a solid color over the whole trajectory. The observation distribution is detailed in Appendix B, but I’m curious about how this distribution was determined and how much it affects the final capabilities of the model? It seems balancing these probabilities would be very important to ensure good results, so I’m wondering does the model do significantly worse when trained on all tasks jointly than trained on one individually? One of the most useful applications of the proposed method could be in simulating scenarios for AV testing, but the current generated trajectories are quite short (at most 5 sec) and limits the kind of scenarios. I’m interested to see if the model can handle longer generated scenarios (~10 sec), and if the trajectories maintain realism using metrics like off-road and collision rate. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Overall, the paper is a nice exploration and demonstration of how diffusion can be applied to traffic scenario modeling, and I tend to think it would be a good contribution to the community. But in the rebuttal, it would be really good to see more qualitative results and for the authors to clear up some of my confusion on the masking and classifier-free guidance procedures. I had a few questions in Sec 4 that didn’t weight heavily in my rating: * Eqn 5: why is the loss on $x$ and not $x_0$? Is the entire trajectory supervised (observed + generated) or just the non-observed parts? * L186: why are the agent states called “latent”? Are they already embeddings of some kind? * It’s interesting that deterministic sampling is better quality than stochastic since this is not the case in other domains – any intuition here? ============== After Rebuttal =================== After the rebuttal and discussion, I have decided to raise my score to Accept. In the rebuttal, the authors addressed most of my concerns, clarified various technical points, and added more qualitative results that demonstrate the variety of applications. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are discussed in Sec 7. It might be good to mention some limitations/failures specific to DJINN and not diffusion in general, e.g., mention that the model is not always realistically consistent with conditioning observations? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Consistency with input observations: We agree that in some examples, DJINN produces samples which are inconsistent across time. We also agree that these inconsistencies may be reduced by utilizing test time guidance. Specifically, classifier-free guidance as outlined in section 5.1 and demonstrated in section 6.2 might be used to improve the quality of these samples. In addition, iterative classifier guidance proposed by CTG is directly compatible with the classifier guidance outlined in section 5.2 and is a promising avenue for future work. We propose including a supplementary figure showing the effect of classifier-free guidance over a variety of guidance weights on the samples from Figure 1 in the final draft. While “Trace and Pace” [Rempe et al., CVPR 2023] is concurrent work by the NeurIPS standards and focuses on pedestrians instead of vehicle motion modeling, we believe it adds additional context for the reader and therefore will add it to our related work section. **Masking Procedure**: DJINN only produces predictions for agent states which are not observed. In Figures 1 & 2, we visualize the generated, unobserved states in orange and the observed states, which are conditional input to the model, in blue. **Classifier Free Guidance Formalism**: Thank you for identifying that the distinction between $x_{obs}$ and $x_{cond}$ in section 5.1 is unclear. We will modify the introduction of $x_{cond}$ to include a statement indicating that $x_{cond}$ could be a future goal state to improve the clarity. **Additional Conditional inputs in CFG**: Regarding your question about our classifier free guidance formulation, we agree that in many applications of classifier-free guidance (for example class conditioned image generation) a completely unconditional score estimate is combined with the conditional score estimate. From [13]: $ \nabla \log p(\mathbf{x}_t | y) = (1-w) \nabla \log p(\mathbf{x}_t) - w \nabla \log p(\mathbf{x}_t | y)$ Where x is the diffusion variable, y are conditional inputs and w is the guidance weight. However, it is also valid to add additional conditioning to every term: $ \nabla \log p(\mathbf{x}_t | y, z) = (1-w) \nabla \log p(\mathbf{x}_t | z) - w \nabla \log p(\mathbf{x}_t | y, z)$ In section 5.1, we define $y$ as $\mathbf{x}_{cond}$. Here, $z$ is all the other conditional information provided to the model, which in our application includes the map as well as other observed agent states. Our removal of specific agent states during guidance is therefore not arbitrary but instead an intentional design choice which follows the conventions of classifier-free guidance with added conditional information. **Phrasing of guidance strength**: We agree with the reviewer’s suggestion about the phrasing of line 275-276 and will update the final manuscript with this modification. **Additional Qualitative Results**: We have supplied some additional composite images showing scenario samples using a variety of observation masks. We agree that gradients on the qualitative result figures might improve clarity, and will update our figures with this modification. **Impact of the Observation Distribution**: The mixture weights for the various tasks in the observation distribution were chosen empirically. However, we have included an additional experiment in our rebuttal investigating this choice. | Model | Ego minADE | Ego minFDE | Scene minADE | Scene minFDE | Mean MFD | |--------------------------|-----------:|-----------:|-------------:|-------------:|---------:| | Predictive Only | 0.21 | 0.49 | 0.35 | 0.91 | 2.33 | | Observation Distribution | 0.26 | 0.63 | 0.45 | 1.17 | 3.11 | The table above shows the difference in performance on the INTERACTION dataset between a model trained on only the predictive task and one trained on the full observation mask distribution as measured using 6 trajectory random samples. As expected, we find the model which is optimized solely on the predictive task performs better on that task than the model which is trained on a wider distribution of tasks. We expect that scaling the network size of DJINN would likely reduce this difference. **Scaling**: We have only trained our model on datasets with scenarios of up to 5 seconds in length, as this is what is commonly used in challenges related to these datasets. We agree that generating longer scenarios is an important avenue of future work, but it is beyond the scope of this work. Therefore, in the final draft of our paper, we will discuss increasing the length of generated scenarios as an avenue of future research. **Eqn 5**: The loss we optimize is squared L2 error between the output of D_\theta and the noiseless unobserved states in the scene. We agree that the notation of equation 5 could be improved. In the final version of our paper, we will clarify the notation used in this equation. **L186**: We use the term “latent” as a synonym for “unobserved”, following the graphical model convention in which variables can be either latent or observed. We will update the text to improve clarity surrounding this point. **Deterministic vs Stochastic Sampling**: We note that EDM [21] also found that deterministic sampling performed better than stochastic sampling in their experiments. Since we utilize their diffusion parameterization, we are not surprised that deterministic sampling performs better in our experiments also. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I thank the authors for their detailed response to my questions and concerns. Most of my issues have been resolved or clarified, and I intend to keep my rating of accept. However, for additional qualitative results I do not see an attached pdf to the individual response or the top-level response. Is it still possible to upload this? --- Reply to Comment 1.1.1: Title: Response to reviewer wJNr Comment: Hello reviewer wJNr, We are puzzled by the apparent disappearance of the additional qualitative results from our top-level response. When we read your rebuttal, we were also unable to locate the attached pdf. However, now we are able to see the qualitative results pdf again. It seems like there was some sort of issue with OpenReview where the pdfs were not visible. We apologize for the inconvenience. If you still are unable to see the qualitative response pdf please let us know and we will send a message to the AC/PC.
Summary: The paper proposes a generative model for producing synthetic traffic scenarios. The proposed method uses a diffusion model and learns to predict multiple agent trajectories jointly in a scene. The main contribution to previous methods is the increased flexibility of conditioning which can use different observation masks (corresponding to different use-cases), goal/non-goal, and other vehicle properties. The method is evaluated on two publicly available datasets and shows competitive performance to its deterministic counterparts. Strengths: DJINN seems like a natural step forward for the multi-agent trajectory prediction task. In theory, the generative framework of diffusion models should allow for better sampling of the data distribution and produce traffic scenarios with larger variability that previous methods such as Scene Transformer [30] and TrafficSim [44]. This is important because datasets with traffic scenarios are highly imbalanced, containing very few safety critical events which are usually more interesting. The paper illustrates this flexibility through several interesting qualitative examples of test-time conditioning. Furthermore, the paper is well written, providing clear motivation, problem formulation, and background to the reader. Weaknesses: Quantitative results in Tables 1,2 show comparable results to [30] in terms of forecasting performance but there is no quantitative experiment demonstrating the increased variability of generated traffic scenarios from the proposed approach. While this is shown qualitatively in sections 6.2, 6.3, and 6.4, and in my understanding [30] does not have the capability of test-time conditioning, I think a quantitative comparison (perhaps estimating variance over waypoint predictions) would still be useful. I would like to hear the authors' thoughts on this. The authors mention that the choice of the reference frame for the agent states is important. Perhaps an ablation study would provide more insight. What is the effect of using the observation masks during training besides defining different use-cases? How would the capability of the model to produce traffic scenarios with large variability be affected if only the "Predictive" mask was used? Please include a runtime comparison to baseline methods. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The paper did not address the limitations and potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to consider our work, and for providing positive, and constructive feedback. We address their concerns below: **Demonstration of increased variability**: We agree that such a comparison should be considered in the evaluation of our model in comparison to the baseline. One possible metric we could consider is the maximum final discrepancy (MFD) [38], which we have used in the “Effect of observation mask” subsection of this rebuttal to measure the variation of generated trajectories. If the reviewer believes that the inclusion of this evaluation would improve the clarity and impact of our work, it can be included in the final version of the paper. **Test-time conditioning**: We agree with the reviewer that surfacing rare events such as safety critical incidents is challenging with imbalanced driving data. The test time conditioning introduced in DJINN is one answer to this problem. Specifically, using classifiers of rare driving behavior and editing scenarios are two approaches introduced which allow sampling from conditional distributions of interest which are not available in prior methods. **Reference Frame**: Although the choice of reference frame is a key design choice, it is not easily modified without significant changes to the model architecture. This is because DJINN is specifically designed for a joint global frame, making extensions to an egocentric frame difficult . Therefore, we argue that such ego-centric prediction is outside the scope of this work, though we agree that it is a promising avenue for future research. **Effect of observation masks**: As the reviewer mentions, the primary reason for training over different observation masks is to define different use-cases at test time. To measure the impact of the observation mask distribution, we have compared the diversity of samples from a model trained with the “predictive” mask to those produced by a model trained on the full observation mask distribution on the INTERACTION validation set. We utilize the maximum final discrepancy as our diversity metric [38] which measures the maximum distance between all pairs of trajectories in a trajectory set. | Model | Mean MFD | |--------------------------|---------:| | Predictive Only | 2.33 | | Observation Distribution |3.11 | From our experiment, we note that training over the observation mask distribution increases the diversity of trajectories produced by DJINN. However, we reiterate that the purpose of the observation distribution is primarily to enable test time guidance using classifier-free guidance and conditional generation of trajectories using arbitrary observation masks. **Runtime**: We have attached a comparison of the runtime of DJINN with [30] which will be added to the final supplementary. We note that the runtime can be lowered at the expense of performance as demonstrated in the rebuttal to reviewer 1Qe5, and that distillation is a promising avenue of future work to lower the computational cost of diffusion models. | Number of Agents | Scene Transformer | DJINN - 25 Steps | DJINN - 50 Steps | |-----------------:|------------------:|-----------------:|-----------------:| | 8 | 0.0126s | 0.574s | 1.149s | | 16 | 0.014s | 0.611s | 1.238s | | 32 | 0.017s | 0.844s | 1.693s | | 64 | 0.026s | 1.404s | 2.891s | --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their comprehensive answers to my comments. I understand that the observation distribution is primarily for test time guidance, but I would suggest including the MFD results as they demonstrate 1) that your model does not collapse to a mean solution, and 2) that training over the observation mask distribution does indeed produce paths with larger variability which is desirable in this task. After reading the other reviews and author rebuttals I am keen on keeping my original recommendation of acceptance.
Summary: The paper proposes a diffusion model-based method for generating joint or conditioned predictions of traffic agents. The proposed method combines the EDM diffusion and SceneTransformer model architecture and provides different guidance for conditional predictions. The experiment results indicate comparable results of the proposed results with SOTA. Strengths: The paper is well-written and the proposed approach gives readers very useful information about how the two SOTA methods (EDM and SceneTransformer) could be combined and its potential performance. Weaknesses: In my understanding, the novelty of the paper is mostly on the combination of existing methods, which is OK but not outstanding. From the experiment results, the proposed method doesn't show clear improvements compared with other methods, which is a bit surprising given the combination of two powerful approaches. It would be great if the author could give more analysis and explanations on why this is the case. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: It would be great if the author could give more analysis and explanations on why the performance of the proposed method doesn't show clear improvements. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: As mentioned in the paper, the long inference time limits the method to be more useful in practice. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you to reviewer TiFK for their thoughtful review of our work and their praise of our paper’s clarity. Below, we have responded to the two areas of weakness which the reviewer has highlighted. **Novelty**: In regards to the novelty of our approach, we accept that our method combines the training and inference structure of EDM [21], while using a model backbone similar to Scene Transformer [30]. However we emphasize that in combining these two approaches we enable capabilities which are not present independently in either work. DJINN allows increased control over traffic scene generation through guidance and scenario editing. Specifically, sampling traffic scenes using arbitrary trajectory classes and editing scenarios are both capabilities which are not available in either work. **Performance**: We argue that the main contribution of our approach is not the predictive power of DJINN as a trajectory forecasting model, but the ability to sample traffic scenes from a wide variety of conditional distributions through guidance and editing techniques as outlined in section 5 and demonstrated in experiment sections 6.2-6.4. Regarding raw performance, we first highlight that DJINN is constructed to perform joint trajectory prediction and not marginal trajectory prediction. In joint trajectory forecasting, DJINN outperforms a comparable baseline as is shown in Table 2, section 6.1. When comparing our joint model against marginal methods in Table 1, we highlight that the standard trajectory forecasting metrics, minFDE and minADE, measure the minimum error over a small set of predicted trajectories. We argue that these metrics implicitly make a strong modeling assumption that the true distribution of driving behavior follows a finite component mixture distribution. The non-generative trajectory forecasting baselines considered in our work incorporate this assumption by directly modeling discrete, deterministic trajectory sets. These sets contain the average driving behavior of each component in the mixture, but might not capture the true driving distribution. Alternatively, DJINN is trained to stochastically produce traffic scenarios which match the data distribution by minimizing a weighted variational bound with weaker assumptions about the structure of the underlying data distribution. We argue that the slightly worse performance of DJINN relative to these models stems from the shared implicit assumption made by common evaluation metrics like minFDE and minADE and the baseline models we compare against. Utilizing the post-processing described in section 6.1 to approximate DJINN’s learned distribution with a six component Gaussian mixture results in reasonable forecasting performance under these metrics. However, we hypothesize that more direct minimization of these metrics through the shared mixture assumption described above is the primary reason for the performance discrepancy.
Summary: DJINN (Diffusion-based Joint Interaction Network) is an innovative generative model that creates, edits, and forecasts multi-vehicle traffic scenarios in a stochastic manner. Leveraging diffusion models, DJINN also addresses the challenge of generating traffic scenes conditioned on a flexible configuration for the observation space. This flexibility of the model makes it well-suited for generating safety-critical events and out-of-distribution scenarios, which are extremely relevant for evaluating the performance of autonomous vehicles. Furthermore, by jointly diffusing the trajectories of all agents and conditioning them to customizable observation windows, the authors provide a fresh and innovative way of forecasting multi-vehicle traffic scenarios. The authors validate the efficacy of DJINN by benchmarking against state-of-the-art trajectory forecasting methods using the popular Argoverse and INTERACTION datasets. The results show that DJINN outperforms existing models like Scene Transformer in joint motion forecasting metrics, demonstrating its promise for contributing significantly to the development and safety testing of autonomous vehicles. Moreover, DJINN’s flexibility is demonstrated through its successful generation of goal-directed samples, examples of cut-in driving behaviors, and editing replay logs. Strengths: Originality: The paper represents a highly original contribution to the field of autonomous vehicle simulation. The use of a diffusion model for generating joint traffic scenarios is innovative, and appears to be a novel application of this technique. The authors move beyond the conventional deterministic sets of trajectory forecasts, proposing a generative model to forecast joint future motion. DJINN's ability to draw traffic scenarios from a variety of conditional distributions further attests to its innovative design. The model also stands out in its ability to provide flexibility in terms of test-time diffusion guidance, offering a new perspective on traffic scenario simulation. Quality: The overall quality of the research is high. The paper presents a sound background and solid related works section, creating a strong foundation for their argument. The authors have clearly demonstrated their understanding of the topic, using relevant and current research. Their methodology is rigorous, and the experiments performed on the Argoverse and INTERACTION datasets are thorough and well-executed. The results obtained are credible and validate the authors' claims about DJINN's performance and flexibility. Clarity: The paper is well-written and clear. It efficiently communicates the problems associated with the simulation of autonomous vehicle systems and convincingly presents DJINN as a potential solution. The structure of the paper is logical, and the flow of ideas is coherent, making it easy for readers to follow the authors' argument. Significance: The significance of this paper is recognizable. Simulating diverse, safety-critical and stochastic traffic scenarios is of paramount importance in the field of autonomous vehicles. Its flexibility in controlling the conditioning of traffic scenes, coupled with its excellent performance on trajectory forecasting, makes it a valuable tool for researchers and developers in the field. Given the increasing interest in autonomous vehicles, this work is likely to have a high impact on both academia and industry. Weaknesses: While the paper represents a significant contribution to the field, there are a few areas where the authors could improve upon in the future iterations of their work: - Detailed Analysis of Limitations: While the conclusion does mention certain limitations, the paper would benefit from a more detailed and separate section dedicated to the limitations of the proposed method. A thorough examination of the constraints and failure cases of DJINN could offer readers a more balanced perspective and help to guide future research efforts. - Future Directions: The authors might also consider providing clearer guidance on the future direction of the work. While DJINN has shown promise in its current state, discussing potential extensions and enhancements would be valuable. For instance, how might DJINN be adapted to handle more complex scenarios, or different action spaces and robotic domain? - Qualitative Analysis of Failure Cases: Alongside quantitative performance metrics, a qualitative analysis of failure cases would provide more comprehensive insights into the model's performance. Discussing specific scenarios where DJINN fails to generate accurate traffic scenes would offer readers a better understanding of its potential weaknesses and areas for improvement. - More Sample Traffic Scenes: The paper could also benefit from a larger set of sample traffic scenes, either included in an appendix or made available online. These examples would provide a more tangible sense of the model's capabilities. Video demonstrations would be particularly effective, allowing readers to visually appreciate the flexibility and realism of the scenes generated by DJINN. - Ablation Studies: The paper could also greatly benefit from conducting ablation studies to understand the influence of different components or variations of DJINN's model architecture on its overall performance. A detailed ablation analysis can help to pinpoint which elements of the model contribute most to its successful generation of traffic scenarios. This could include investigations into the impact of the diffusion process, the role of different conditional distributions, or the effect of different types of state observations. Understanding these aspects in more detail would not only provide further insights into DJINN's performance and functioning, but also guide future optimization and refinement of the model. Minor comments: Unlinked reference at line 88. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - How might DJINN be adapted to handle more complex scenarios, higher action spaces or other robotic domain? - Given that the inference time of DJINN it relatively slow, due to its diffusion structure and iterative estimation of the score function. Do you have any quantitative results on the tradeoff between computing time, performance and variation of network? (number of layer, depth, number of diffusion steps, etc) - How suitable would the model, as it is, be for a model of predictive control setting and real life deployment? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: While the authors acknowledge some limitations of DJINN in the conclusion, particularly the relatively slow inference time, a more comprehensive and explicit discussion of these drawbacks would enhance the overall balance and depth of the paper. Moreover, this section could provide a more explicit roadmap for future research directions. While the paper showcases DJINN's potential, there is room to explore its applicability in other contexts, or the implications of further improving its inference speed. Articulating these directions can inspire and guide subsequent work in this line of research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank reviewer 1Qe5 for taking the time to consider our work, as well as for their overwhelmingly positive and constructive feedback. We consider the areas which the reviewer believes that our work can be further improved below: **Detailed Analysis of Limitations**: We agree that additional discussion of the limitations could improve the clarity and impact of our work. In order to address the reviewer's comment, we plan to include an additional ablation study in the final draft of the paper which we feel will provide the reader with a more intuitive understanding of our model and its limitations. Specifically we will look at how DJINN’s run-time scales under increased agents and scenario lengths. This is a well known shortcoming of transformer models, which by extension also affects DJINN itself. **Future Directions**: We note that a small discussion of future research areas is included in section 7 (line 321), however in the final draft we will further elaborate on these directions, including many of the useful suggestions provided by the reviewers. **Qualitative Analysis of Failure Cases**: We agree that consideration of failure cases can often provide insight into what the limitations of the model might be in practice. However during our experiments we found that failure cases were most often the result of poor dataset annotations, and not any fundamental underlying problem with our modeling approach. **More Sample Traffic Scenes**: We thank the reviewer for this suggestion, which we agree would enhance the quality of our work. We have attached a pdf containing an additional figure in our rebuttal showing additional qualitative results for our method. This figure provides samples using the predictive, goal conditioned, agent reactive, and up-sampling tasks outlined in Figure 1 of the main text. If accepted, we will add this figure to the supplementary. Additionally, in the final draft we can include a link to a paper web-page including video examples of our method. **Ablation Studies**: In order to address the reviewer’s concerns, we have included the following ablation result regarding the effect of varying number of diffusion steps on the quality of generated traffic scenarios. We evaluate our INTERACTION model using a variety of diffusion steps, evaluating metrics over a 6 trajectory samples per scene, and refrain from fitting a Gaussian mixture model. | Diffusion Steps | Ego minADE | Ego minFDE | Scene minADE | Scene minFDE | |----------------:|-----------:|-----------:|-------------:|-------------:| | 10 | 0.28 | 0.64 | 0.45 | 1.135 | | 20 | 0.22 | 0.51 | 0.37 | 0.95 | | 30 | 0.22 | 0.50 | 0.36 | 0.92 | | 40 | 0.21 | 0.50 | 0.35 | 0.92 | | 50 | **0.21** | **0.49** | **0.35** | **0.91** | Reducing the number of diffusion steps decreases the predictive accuracy of the model, but improves the runtime, allowing the user to trade performance for speed. The other ablation which was requested in the questions section of the review regarding the varying of model layer depth will be included in the appendix of the final draft of the paper. **More complex scenarios**: We note that because DJINN is based on a transformer model, we expect that with larger datasets, the model's performance would continue to scale desirably. This property has been well studied in other transformer networks [Kaplan and Mcandlish, 2020]. While adapting DJINN to more complex robotics domains represents an interesting future direction, it is beyond the scope of this work. DJINN is tailored specifically to autonomous driving, and while the diffusion framework can be adapted to arbitrary state spaces, determining how best to learn in such state spaces would require additional consideration. We will add this as a suggestion for future work into our final manuscript. Computing time, performance and variation of network: As is discussed in Ablation Studies, we have included an experiment which considers the effects of varying the number of diffusion steps. In our response to reviewer 1Rcd, we have also measured the runtime of our method over varying numbers of agents and diffusion steps. An experiment which varies the network structure will be included in the final draft of the paper. **Suitability in MPC**: DJINN could be adapted to a multi-agent, model predictive control setting (MPC), where a local dynamics model is known but the external agents’ models are unknown. MPC could proceed by taking an action, stepping the local dynamics model to determine the next egocentric state, then using DJINN to predict the states of all external agents in the scene. Such a method could function as a basic MPC algorithm, or could even be used for interactive simulation and driving policy evaluation. Additionally, we hypothesize that MPC might be useful in reducing collisions in generated scenarios, but evaluation of this hypothesis is beyond the scope of our current work. We agree that investigation into how generative models like DJINN might be applied to real-world deployment represents an important research direction as autonomous vehicle technologies continue to evolve. However, considering the difficult research questions surrounding closing the sim2real gap and diffusion model runtime, DJINN would be unlikely to be suitable for onboard, real-time deployment. That said, we will add a discussion of these research questions to our final manuscript.
Rebuttal 1: Rebuttal: We would like to thank all of the reviewers for their time in considering our work and their thoughtful comments which we believe will help to improve the quality of our submission. In response to requests for additional qualitative results, we have provided an attached pdf which contains a composite image of multiple traffic scenes generated by DJINN under a variety of observation masks. Pdf: /pdf/287c1a0399d710875d2b522151a3ace2551591de.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Compressed Video Prompt Tuning
Accept (poster)
Summary: This work studies the video classification task in the compressed video. With the motion vector and residual as the prompt, this work proposes selective cross-model complementary prompter idea to enhance the cross-model interactions, achieving promising results while maintaining a small number of trainable parameters. Strengths: Most parts of this paper are well written which clearly demonstrates its motivation and methodology. Especially, the method of this paper is easy to follow. The idea of using motion vector and residual in the compressed video as prompts is inspiring. These two modalities are computationally free to access, which is a significant advantage compared to traditional motion cues, such as optical flow. From the experimental results, the proposed CVPT achieves promising results while keeping the tunable parameters low. Visualization is a plus. Weaknesses: The evaluation benchmarks (UCF, HMDB, and SSv2) are all small scaled. One concern is the scalability of the proposed method. The design of some key components is not well justified and the figure 2(right) is confusing. For example: 1. Will the prompt join the multi-head self-attention (MHSA)? If so, why the frozen MHSA is able to leverage prompt feature from other modalities. 2. In the SCCP, are the input prompts directly from the previous SCCP other than MHSA? What’s the motivation and effectiveness evidence of keep using the SCCP-processed prompts in SCCP of different layers? 3. In L_2, there is a closed loop for the I-frames embedding. What is the connection to the previous layer. 4. Not sure what are summed together by using the adding function in L_1 and L_2. The experiments set-up in the ablation study is not clear enough. In the Table 3, CPR without refinement and CPR are both activated in the last row, which is confusing. Also, the linear probe is used as the baseline, but some implementation details are missing, for example: the number of input frames, how prompts attend the self-attention, and the architecture of classifier. In Table 5, what is the implementation details of fully fine-tune? Ln 45 suggests the large parameter storage burden from previous methods. However, there is no related comparison in the following analysis. In addition to the trainable parameter, the throughput (samples/second), computational burden (GFLOPs), and memory cost (GB) are important metrics. Minor: Ln 131 The resulting -> the resulting There is no notation for g_4 to g_6 in Eq 7 Table 4, E_R -> P_R Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: From the Ln124-125, what are the challenges, is there any examples? And is there any evidence to show the inconsistencies between upstream and downstream data? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The scalability may be one of the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: W1:The evaluation benchmarks (UCF, HMDB, and SSv2) are all small scaled. One concern is the scalability of the proposed method. A1:In fact, SSv2 is one of the largest datasets used in compressed video, containing a substantial **193,690** videos. Our approach also shows performance improvement on SSv2, illustrating its good scalability. W2:The design of some key components is not well justified and the figure 2(right) is confusing. For example: 1. Will the prompt join the multi-head self-attention (MHSA)? If so, why the frozen MHSA is able to leverage prompt feature from other modalities. 2. In the SCCP, are the input prompts directly from the previous SCCP other than MHSA? What’s the motivation and effectiveness evidence of keep using the SCCP-processed prompts in SCCP of different layers? 3. In L_2, there is a closed loop for the I-frames embedding. What is the connection to the previous layer. 4. Not sure what are summed together by using the adding function in L_1 and L_2. A2.1:Prompt will join the MHSA. Despite the frozen MHSA parameters, the capacity to tune both the input visual token and prompts remains intact. Furthermore, our SCCP module effectively functions as modality alignment. This is achieved through the simultaneous integration of improved motion cues into the I-frame token and the concurrent update of the prompt. A2.2: The SCCP module can dynamically update prompts within each layer to refine the model-tuning process. This strategy is also embraced by the VPT, where new prompts are inserted into each layer. We conducted a comparative experiment, revealing that our strategy yields superior performance in contrast to the alternative method that relies on prompts from the preceding MHSA layer. |Method|Acc.| |:-|:-:| |prompts from MHSA|83.7| |Ours|87.6| A2.3:The I-frame token is sourced from the output of the preceding layer's model output. A2.4:The adding function employed signifies the summation of $E_I$ from the SCCP output and I-frame tokens from the backbone. W3:The experiments set-up in the ablation study is not clear enough. In Table 3, CPR without refinement and CPR are both activated in the last row, which is confusing. Also, the linear probe is used as the baseline, but some implementation details are missing, for example: the number of input frames, how prompts attend the self-attention, and the architecture of classifier. In Table 5, what is the implementation details of fully fine-tune? A3.1:Indeed, we will revise Table 3 to make it more clear. A3.2:In the linear probe configuration, an equivalent input to the prompt setup is used, encompassing 4 I-frames and 12 P-frames. A uniform embedding function is applied to encode all three modalities. The backbone is pre-trained ViT on the Kinetics-400 with raw videos, and the classification layer is a simple MLP layer that maps the latent features into classification categories. Subsequently, the concatenated tokens across modalities are input into the network. Similar to prior studies [15, 19], only the classification layer is involved in training within the linear probe framework. A3.3:In the full fine-tuning setup presented in Table 5, the same inputs utilized as in our approach are maintained, consisting of 4 I-frames and 12 P-frames. Distinctive embedding functions are applied to encode the various modalities. The backbone is either ViT or Swin under different pre-training manners. All network parameters are trainable within this configuration. W4:Ln 45 suggests the large parameter storage burden from previous methods. However, there is no related comparison in the following analysis. In addition to the trainable parameter, the throughput (samples/second), computational burden (GFLOPs), and memory cost (GB) are important metrics. A4:We make a comprehensive comparison with the raw video method, VideoMAE, in the table below. This comparison effectively demonstrates the superior efficiency of our approach in various aspects. Notably, our method excels in rapid inference by bypassing the need for video decoding. Furthermore, our utilization of compact compressed video inputs contributes to reduced GFLOPs, making our method computationally lighter. Moreover, our innovative prompt tuning framework which only involves training a small subset of the parameter, minimizes memory usage. This presents a significant advantage over the full fine-tuning method. |Method|Videos/Second|GFLOPs|Memory Cost (GB)| |:-|:-:|:-:|:-:| |VideoMAE (ViT-B)|0.40|1080.0|23.6| |CVPT (ViT-B)|3.80|772.2|13.4| W5:Ln 131 The resulting -> the resulting, There is no notation for g_4 to g_6 in Eq 7, Table 4, E_R -> P_R A5:Thank you for your valuable suggestion. We will revise the typo and rectify any potential errors. Q1:From the Ln124-125, what are the challenges, is there any examples? And is there any evidence to show the inconsistencies between upstream and downstream data? A1:The challenge under consideration encompasses not only divergences in tasks but also disparities in data modalities. This is evident in scenarios addressed in this paper – transitioning from pre-trained raw video large models to downstream tasks involving compressed video that includes additional motion vectors and residuals. Modality gaps also exist when transitioning from upstream RGB-based tracking models to downstream tasks involving RGB+infrared or RGB+depth-based tracking. Furthermore, this challenge extends to cases where upstream image large models are applied to fine-tuning tasks involving video and text modalities. L1:The scalability may be one of the limitations. A1:The progress in compressed video research has led to rapid advancements, despite the relatively smaller dataset scale in comparison to image datasets. Our approach has demonstrated its efficacy on SSv2 dataset which serves as a significant large benchmark. However, due to current constraints, the validation of our method on a larger dataset remains unachievable. --- Rebuttal Comment 1.1: Title: Respond to author rebuttal Comment: Thanks authors for their effort in preparing the rebuttal. After reading through the rebuttal, some of my concerns are solved, such as the confusing tech details and writing part. However, the main concerns are still there for example, the motivation of this work is not well justified, and the scalability . Compared to the K400, the SSv2 is a small scaled dataset in general video understanding benchmarks. Suggest authors to provide more evidence to support the claim in this work. I also try to learn from other reviews who provide accept, however the mentioned strengths were not well supported with evidence. I agree with Reviewer 5c9t, and have similar concerns. As a result, I keep my rating as reject. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your time and efforts in reviewing our paper. Q1: the main concerns are still there for example, the motivation of this work is not well justified, and the scalability . Compared to the K400, the SSv2 is a small scaled dataset in general video understanding benchmarks. A1: In compressed video analysis, previous methods have predominantly focused on refining network architectures tailored to align with the characteristics of compressed video. This endeavor inherently involves pre-training these network structures to enhance their performance. Furthermore, some researchers have explored the formulation of self-supervised pre-training tasks meticulously designed for compressed video. Pre-training stands as a pivotal element within these methods, albeit often consuming a significant amount of time. In contrast, our approach introduces an alternative perspective – harnessing prompts to adapt the pre-trained raw video model for tasks involving compressed video. Given the wealth of existing pre-trained raw video models and the inherent strong correlations between raw and compressed videos. Our method sidesteps the need for the pre-training process while achieving results comparable to other compressed video methods. SSv2 encompasses a training dataset of 169k instances across 174 categories, placing it within the same order of magnitude as K400. The K400 dataset comprises 240k videos spanning 400 categories, and notably, SSv2 stands out as a challenging dataset that incorporates a greater number of motion-centric action categories. Several studies [40, 6, 44, 39] have presented results on SSv2. We follow their approach, utilizing raw video models pre-trained on K400 in a supervised and self-supervised manner. Our report encompasses results from both smaller datasets (HMDB-51 and UCF-101) and the extensive SSv2 dataset. Furthermore, we intend to include results from K400 using the self-supervised pre-trained models.
Summary: The authors present a way to adapt pre-trained raw video models to compressed videos. They utilized the existing concept of prompt tuning from NLP and repurposed it for the compressed video domain. Their findings indicate that by fine-tuning just 0.1 percent of parameters for a downstream task such as video classification, the pre-trained model can be modified to cater to compressed videos. Strengths: * As per my knowledge, this is the first study to explore the prompt tuning method for the compressed video domain. * Enough experimental results are provided to back the claims made in paper * Paper is well written Weaknesses: * Only video classification is shown as a downstream task. * Why are the numbers reported in the submission differ from the numbers reported in the original paper(for eg: CoViAR) is there differences in the experimental setting? * Add Bold text in the tables to highlight the best-performing methods as it is really hard for the readers to sift through the tables in their current condition. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * Was a LoRA update option considered instead of prompt fine-tuning? If so, can the details of such experiments be mentioned in the supp. material Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: . Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: W1:Only video classification is shown as a downstream task. A1:Our designed modules are tailored to address a wide spectrum of tasks pertaining to compressed video. We follow [6] and provide experiments on video classification as a downstream task. Nonetheless, it is necessary in extending our validation to encompass additional tasks. We employed a self-supervised pre-trained ViT-B and subsequently conducted full fine-tuning or prompt tuning on the UCF-101 dataset for video retrieval task. In this context, we extracted the averaged tokens from the last transformer block to serve as the feature representation. the features of the test video queries were matched against the k-nearest-neighbors within the training set features. For assessment, we adopted the recall at k (R@k) metric, consistent with [15, 19]. R@k quantifies the proportion of queries where the top-k nearest neighbors include at least one video belonging to the same class. The experimental results also provide clear evidence of the improvements achieved by our method over full fine-tuning in the context of the retrieval task. | Method | R@1 | R@5 | R@10 | R@20 | R@50 | | :-------------: | :--: | :--: | :--: | :--: | :--: | | full fine-tune | 91.4 | 94.3 | 95.3 | 96.5 | 96.7 | | CVPT$^*$ | 87.6 | 90.5 | 92.4 | 95.3 | 96.4 | | AdaptFormer$^*$ | 82.3 | 85.3 | 89.9 | 93.7 | 97.0 | | CVPT | 92.1 | 96.3 | 97.5 | 98.7 | 99.5 | W2:Why are the numbers reported in the submission differ from the numbers reported in the original paper(for eg: CoViAR) Are there differences in the experimental setting? A2:The results reported for CoViAR in the original paper are grounded under supervised pre-training. In contrast, our study showcases the results under self-supervised pre-training, as reported by IMRNet. Given the prevalence of self-supervised pre-training among the methods being compared, we present CoViAR's results based on self-supervised pre-training, as outlined in the IMRNet. Additionally, we consider incorporating the result of CoViAR under supervised pre-training to enhance the comprehensiveness of our paper. W3:Add Bold text in the tables to highlight the best-performing methods as it is really hard for the readers to sift through the tables in their current condition. A3:Thanks for your suggestion. We will make improvements in next version. Q1:Was a LoRA update option considered instead of prompt fine-tuning? If so, can the details of such experiments be mentioned in the supp. material A1:LoRA introduces an auxiliary module that employs lower-order matrices through mapping weights within the multi-head attention layer. During training, only this auxiliary module is actively trained. This method shares similarities with the practice of enhancing a frozen model with adapters. In contrast to LoRA, the prompt tuning framework offers heightened flexibility in incorporating varying quantities of prompts at diverse positions. Furthermore, this approach simplifies the integration of a priori information into the prompt construction process. While the application of LoRA for compressed video falls outside the scope of this paper, we consider the prospect of exploring this avenue in future endeavors.
Summary: The paper presents one alternative way of finetuning to work on compressed videos. Specifically, it designs a specific data flow within three modalities (RGB, residual, and motion vector). It also presents the way to make the model adapt to new compressed videos and provide a fair comparison. It demonstrates SOTA performance to understand the proposed setting. Strengths: The problem this paper studies is pretty interesting and useful. Their method design is also complete and thinks about the alternatives. The presentation is pretty clear while the results are SOTA under their setting. Weaknesses: The motivation of residual gating motion vector and gating I-Frame information is still weak to me. It would be better if the author can provide more evidence (exps and visualization). The presentation of CPR is a little weak to me and needs to be a little more clear. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Can you also compare the training time & training epochs when you compare with finetune, linear probing, etc? Can you also compare the data scale between the data used to train the frozen model and the data used for your method training? If we change the order of motion vector and residual, will the results be better than Fig5b? (motion vector gating residual and gating I-frame.) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: W1: The motivation of residual gating motion vector and gating I-Frame information is still weak to me. It would be better if the author can provide more evidence (exps and visualization). A1:In compressed video, motion vectors capture motion displacement between preceding and subsequent frames via block matching, while residuals represent the remaining error after motion estimation. Earlier studies [38, 43] have utilized residuals to help rectify erroneous motion information within motion vectors,owing to the coarse and noisy nature of the latter. In this context, we propose the utilization of residuals to generate attention patterns, serving to refine the motion information. We extend the visualization through Figure A, which provides additional insight into the utilization of motion vectors to generate attention for residuals and utilization of residuals to generate attention for I-frames. The experimental results of these strategies are compared below. The generation of attention maps using motion vectors does not exhibit a notable filtering effect when observed through visualization. Consequently, the performance is notably akin to that w/o residual gating. | Method | Acc. | | :---------------------------: | :--: | | w/o residual gating | 86.9 | | Motion vector gating residual | 87.0 | | Ours | 87.6 | W2:The presentation of CPR is a little weak to me and needs to be a little more clear. A2:Thanks for your suggestion. Every modality within the compressed video exhibits incompleteness and significant inter-modality variations. Simultaneously, the presence of I-frames and the other two modalities is mutually exclusive. Thus, we consider employ distinct embedding functions to encode motion vectors and residuals into separate conditional prompts. Given the relatively longer temporal associated with motion vectors and residuals, an extended temporal tube is utilized to ensure uniform token number across all three modalities. Due to its primary emphasis on edge information within the motion [37], the residual plays a crucial role in enhancing recognition tasks within this specific region. Thus, our approach employs residuals to generate an attention map for filtering motion vectors. Q1:Can you also compare the training time & training epochs when you compare with finetune, linear probing, etc? A1:we present a comparative analysis of the training time for three distinct methods when conducting 100 epochs on the UCF-101 dataset. Our approach achieves a reduction of over 50% in training time compared to full fine-tuning. Additionally, the training time aligns comparably with the linear probe configuration. | Method | Training Time | Training Epoch | | :------------: | :-----------: | :------------: | | full fine-tune | 10h | 100 | | linear probe | 4.2h | 100 | | CVPT | 4.8h | 100 | Q2:Can you also compare the data scale between the data used to train the frozen model and the data used for your method training? A2:The frozen model is pre-trained using the large dataset (Kinetics-400), employing raw videos in both supervised and self-supervised training manners. In contrast, we employed UCF-101, HMDB-51 and SSv2 datasets as distinct training data, respectively. | Data used for Frozen Model | Data used for our Method | | :------------------------: | :----------------------: | | Kinetics-400 (240k videos) | UCF-101 (9.6k videos) | | Kinetics-400 (240k videos) | HMDB-51 (6.8k videos) | | Kinetics-400 (240k videos) | SSv2 (169k videos) | Q3:If we change the order of motion vector and residual, will the results be better than Fig5b? (motion vector gating residual and gating I-frame.) A3:We present the experimental results of gating residuals with motion vectors in W1 & A1, yielding a marginal increase of 0.1% compared to the result depicted in Figure 5(b). This finding suggests that the attention generated with motion vectors may not effectively perform the desired filtering, both in terms of visualization and experimental results.
Summary: This paper proposes an efficient fine-tuning method based on the prompting concept in the compressed video domain. The intuition is to freeze the backbone pre-trained on raw videos and use the proposed prompting techniques to query the required information for the compressed videos. To address the multi-modality of the compressed videos (RGB images, motion vectors, and residuals), the authors propose embedding them first and using a SCCP module to fuse and refine them. The SCCP module is designed based on the fact that the video tasks are more important and sensitive to the motion boundaries. Therefore, the residual map acts as a condition to attend the motion vector map. The results are then added back to the RGB image tokens. The outputs of SCCP are then passed to the next pre-trained layer. Superior performance is obtained and the gap between raw video is shortened. Strengths: - The idea is simple and easy to follow. - The efficient design is important to the video tasks as the backbone can be frozen. The overall learnable parameter is significantly small compared to the backbone. Therefore, for different downstream tasks, the storing problem can be alleviated. - The idea of using the residual map to attend to the motion vectors sounds novel and interesting. Weaknesses: * Some operations in the proposed method are confusing: - In Eq. 6, what is the physical meaning of adding an attended motion vector to image embeddings/tokens? The motion vectors represent the relative movement information for each of the spatial blocks (movement in x and y directions in the form of vectors), while the RGB embedding contains spatial information/structure. The addition operation does not really make sense. - The processed \mathcal{M}_l(E)_I^l) is added to E_I^l again, what is the intuition behind this operation? The overall sequence is: the motion vector is attended by the residual information and then added back to the raw image. The output is further processed and added back to the raw image. How to physically explain this? * The overall architecture is very similar to [51]. What is the fundamental difference? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: L3 should be: existing methods for compressed video classification/application … ? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: Limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: W1:Some operations in the proposed method are confusing: In Eq. 6, what is the physical meaning of adding an attended motion vector to image embeddings/tokens? The motion vectors represent the relative movement information for each of the spatial blocks (movement in x and y directions in the form of vectors), while the RGB embedding contains spatial information/structure. The addition operation does not really make sense. A1:The I-frame tokens are embedded via sparsely sampled I-frame sequence which encompasses both spatial and motion information. However, the intrinsic motion details within these tokens are relatively subdued. To address this limitation, we enhance the I-frame features by incorporating motion vector attributes. This augmentation effectively enriches the motion-related aspects within the I-frame feature representation. Notably, this strategy is commonly employed in both the compressed video domain [42, 45, 28], where RGB and motion information are fused. Although we also considered concatenation as an alternative to addition, we found that it yielded similar performance while incurring higher computational complexity. In raw video domain, where RGB and optical flow information are merged similarly in [52, 53, 54]. [52]. Simonyan K, Zisserman A. Two-stream convolutional networks for action recognition in videos. In NeurIPS, 2014. [53]. Wang Y, Long M, Wang J, et al. Spatiotemporal pyramid network for video action recognition. In CVPR, 2017. [54]. Xie D, Deng C, Wang H, et al. Semantic adversarial network with multi-scale pyramid attention for video classification.In AAAI, 2019. W2:The processed $\mathcal{M}_l(E)_I^l)$ is added to $E_I^l$ again, what is the intuition behind this operation? The overall sequence is: the motion vector is attended by the residual information and then added back to the raw image. The output is further processed and added back to the raw image. How to physically explain this? A2:This operation aimed at incorporating prompts into input tokens, thus facilitating the efficient tuning of the backbone for adaptation to downstream tasks. Within the SCCP module, its objective is to enhance the I-frames feature with essential motion cues. After the SCCP processing, the updated conditional prompts are propagated to the subsequent SCCP layer. Simultaneously, augmented features are assimilated into the input token through an additive mechanism which is a routine operation of the prompt framework. W3:The overall architecture is very similar to [51]. What is the fundamental difference? A3:Indeed, our method draws inspiration from [51]. Nevertheless, when dealing with the prompt tuning challenge posed by compressed video, a unique complexity arises. **Firstly**, each individual modality within compressed video is incomplete and exhibits significant differences among the modalities. Thus, we adopt distinct prompts for encoding motion vectors and residuals within the compressed video. This departure from the unified prompt approach in [1] enhances the efficiency of the entire tuning process. We adopt a similar architecture, as illustrated in Figure 5(a) for comparison. In Table 4, we present a comparative analysis, substantiating how our distinct prompt strategy enhances the efficacy of prompt tuning. **Secondly**, motion cues in P-frames are coarse and noisy. Leveraging the inter-modality correlation, we introduce a novel operation that purifies motion cues within the SCCP module. This operation facilitates the integration of more accurate motion information into the inputs, consequently bolstering the performance. In contrast, [1] benefits from the utilization of comprehensive and accurate diverse modalities, thereby eliminating the necessity for such refinement. Q1:L3 should be: existing methods for compressed video classification/application … ? A1:Thank you for your suggestion. We will make modifications to enhance precision and clarity within the text. --- Rebuttal Comment 1.1: Title: Respond to author rebuttal Comment: I would like to thank the authors for the rebuttal. However, after reading it, my concerns are not resolved, I decided to retain my rating and vote to rejection, please see the following: * Adding an attended motion vector to I-frame image imbedding is still unclear and inappropriate. The 3 mentioned references are not using similar merging operations: - [52]: warping is used which is more appropriate and makes sense. - [53]: mentions that ‘element-wise sum and concatenation do not capture the interactions across the spatial and temporal features, so they may suffer from substantial information loss.’ And they use bilinear fusion operation to model the correlation between each element in spatial and temporal features - [54]: no interaction operation is mentioned. * The physical meaning is still missing. Why additive mechanism is a routine operation of the prompt framework? * From the authors response, the main difference between [51] is the separately processing of two additional modalities, which may limit the novelty. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your time and efforts in reviewing our paper. Q1: Adding an attended motion vector to I-frame image imbedding is still unclear and inappropriate. The 3 mentioned references are not using similar merging operations: A1: The comprehensive exploration of spatio-temporal information stands as a fundamental challenge in video representation learning. In regards of compressed videos, I-frames present themselves with sparsely sampled RGB frame sequences from raw videos, encompassing not only spatial characteristics but also partial motion details. However, the limited sampling density results in insufficient representation of motion information, necessitating supplementation through the integration of motion vectors. This process embodies the physical meaning of fusing motion vector embeddings into the I-frame embedding. As for fusion strategies, there are some alternatives in literature besides addition, such as concatenation [45], lateral connection [28] or score-level late fusion [42]. We have already tried these alternatives, and the experimental results indicate their suboptimal efficacy and resource-intensive nature (Utilizing concatenation for fusion in our method leads to an additional 4% increase in parameter count). Therefore, we adopt the simple yet effective addition operation in our submission. Regarding [52,53,54], we’d like to make a further clarification. Similarity refers to the fact that within the raw video approach, both the optical flow modality and image embedding, akin to the characteristics of motion vectors, are employed for fusion. Compared to motion vectors, optical flow represents the pixel-wise relative movement information (movement in x and y directions in the form of vectors), resulting in a more intricate and denser flow representation. The specific fusion strategy in [52-54] isn't the primary focus of citing those literatures. Furthermore, we also delve into the fusion strategy adopted by [52-54]. Both [52] and [54] used late fusion at the score level, a fusion that allows the two branches to interact only in the final stage. [27, 28] have indicated that multi-stage feature-level fusion is better than late fusion. [53], on the other hand, uses a better fusion strategy for fusion at the end, but he has a higher computational effort in the fusion stage, which is not suitable for fusion at different stages. Q2:The physical meaning is still missing. Why additive mechanism is a routine operation of the prompt framework? A2: We will make a more detailed clarification on the physical meaning. $E_I^l$ refers to input data, $\mathcal{M}_l(E)_I^l)$ indicates inserted prompt. By adding $\mathcal{M}_l(E)_I^l)$ into $E_I^l$ the process involves inserting the prompt into the data for fine-tuning. The physical meaning of this process is using the prompt interacts with the input Iframe to adapt the RGB pre-trained model for the downstream compressed video task. In literature, the additive mechanism is widely used in prompt tuning, such as [52], [22], based on which we call it a routine operation. Q3: From the authors response, the main difference between [51] is the separately processing of two additional modalities, which may limit the novelty. A3: Actually, the main difference is more than separately processing of two additional modalities, and we’d like to make this point clearer. Despite that our work is inspired by [51], our work is substantially different from it in the following aspects. First, [51] is designed for multi-modal tracking for raw videos. In contrast, our work aims to adapt a pre-trained raw video based model to downstream compressed video based vision tasks, which is firstly investigated and extensively evaluated by comparing to existing SOTA approaches in our work. Second, [51] neglects to deal with the core challenge in compressed video, \emph{i.e.} how to integrate the coarse and noisy motion cues in motion vectors and the incomplete spatio-temporal embeddings from I-frames due to sparse sampling. Alternatively, our work develops the SCCP module to specifically address this challenge,by SCCP module which taking in the Iframe token input, as well as the motion vector prompt and the residual prompt. It then generates fused Iframes prompt while simultaneously updating the motion vector and residual prompt. Besides, we also present novel module to refine the coarse and noisy motion vectors by leveraging the residual-based attention. Based on the above difference, as displayed in Table 4, our method achieves significant improvements, with 3.8% on HMDB-51 and 2.1% on UCF-101, compared to [51], clearly showing the effectiveness of the proposed novel components.
Rebuttal 1: Rebuttal: We are appreciative of the valuable insights shared by the reviewers. In response, we have thoroughly addressed each comment, providing individualized responses to each reviewer. The supplied PDF includes visualizations corresponding to Weakness 1, as referenced by Reviewer Eouy. Pdf: /pdf/f89cccb9cfec502c71854647d7837ceb7c89c1cd.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper explores how to transfer pretrained RGB models to compressed videos with a parameter-efficient paradigm and introduces a prompt-tuning method named Compressed Video Prompt Tuning (CVPT). In CVPT, the learnable prompts are replaced with encoded compressed modalities that are refined in each layer. To improve cross-modal interactions between prompts and RGB input flow, this paper proposes a Selective Cross-modal Complementary Prompter block that refines the motion cues and complements other modalities to the RGB modality. Experimental results show that CVPT outperforms full fine-finetuning and other prompt-tuning methods on SSv2, UCF101 and HMDB51 dataset. Strengths: 1. The proposed prompt-tuning method designed for compressed videos can leverage the pretrained RGB models with a few trainable parameters. 2. On compressed videos, CVPT outperforms full fine-finetuning and other prompt-tuning methods (VPT, AdaptFormer) on SSv2, UCF101 and HMDB51 datasets. Weaknesses: 1. The computational cost (GLOPs) of the proposed method may be higher than that of some previous works based on ViT. According to (1), the tokens of I-frames, motion vector prompts and residual prompts are all sent to each layer of pretrained ViT, so the number of input tokens may be larger than that of some previous ViT-based models. 2. This paper says that compressed videos "provide notable advantages in terms of processing efficiency", but there is no efficiency comparison with previous works based on raw videos. It would be better to provide the inference time comparison between previous works (especially RGB models) and the proposed method. 3. Some state-of-the-art methods[2,3] on the UCF101, HMDB51, and SSv2 dataset are not compared in this paper. 4. The idea of this paper is mainly from [1], as mentioned in this paper, but the exploration of compressed video understanding is encouraged. - [1] Jiawen Zhu, Simiao Lai, Xin Chen, Dong Wang, and Huchuan Lu. Visual prompt multi-modal tracking. CVPR, 2023. - [2] Christoph Feichtenhofer, Haoqi Fan, Bo Xiong, Ross Girshick, and Kaiming He. A large-scale study on unsupervised spatiotemporal representation learning. CVPR 2021. - [3] Rui Wang, Dongdong Chen, Zuxuan Wu, Yinpeng Chen, Xiyang Dai, Mengchen Liu, Lu Yuan, and Yu-Gang Jiang. Masked video distillation: Rethinking masked feature modeling for self-supervised video representation learning. CVPR 2023. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The comparison of GLOPs should be provided in Table 1 & Table 2. 2. It would be better to provide the inference time comparison between RGB models and the proposed method. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have addressed the limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: W1&Q1:The computational cost (GLOPs) of the proposed method may be higher than that of some previous works based on ViT. According to (1), the tokens of I-frames, motion vector prompts and residual prompts are all sent to each layer of pre-trained ViT, so the number of input tokens may be larger than that of some previous ViT-based models. & The comparison of GLOPs should be provided in Table 1 & Table 2. A1:In our proposed method, while other modalities are employed, each modality is sparsely sampled. In contrast to previous raw video ViT-based approaches (*e.g.* VideoMAE, MotionMAE, MVD), our method yields a reduction in the number of input tokens. Typically, these approaches sample either 16 or 32 frames from a 64 frames sequence with intervals of 4 or 2. In contrast, we sample only 4 I-frames from 4 continuous GOPs, spanning a 48 frames sequence. In parallel, we selectively extract 3 P-frames (comprising motion vectors and residuals) from each GOP, which serve as the input to our model. Furthermore, Our method also has reduced tokens compared to previously compressed video ViT-based methods (*e.g.* MM-ViT) which sample the same number of frames to form an input size equivalent to the raw video ViT-based methods. Thank you for your suggestion. Due to time constraints, we currently present the GFLOPs of ViT-based approach below. A notable observation emerges that our method has lower GFLOPs compared to other methods due to the reduction in inputs. We intend to provide a comprehensive report of GFLOPs for all methods later. |Type of Method| Method |GFLOPs| | :-: |:-:|:-:| |Raw|$M^3$Video|1080.0| |Raw|MotionMAE|1080.0| |Raw|VideoMAE|1080.0| |Raw|MVD-B|1080.0| |Raw|VPT|1089.6| |Raw|Adaptformer|1093.8| |Compressed|CoViAR|1222.0| |Compressed|Full Fine-tune|772.2| |Compressed|Linear Probe|772.2| |Compressed|VPT$^*$|778.2| |Compressed|AdaptFormer$^*$|780.6| |Compressed|Ours|772.2| W2&Q2:This paper says that compressed videos "provide notable advantages in terms of processing efficiency", but there is no efficiency comparison with previous works based on raw videos. It would be better to provide the inference time comparison between previous works (especially RGB models) and the proposed method. & It would be better to provide the inference time comparison between RGB models and the proposed method. A2:Thanks for your suggestion. The efficiency inherent in compressed video analysis manifests through its direct evaluative capacity, obviating the need for video decoding. We present a comprehensive comparison of inference times between our proposed compressed video method and raw video method (*e.g.* VideoMAE) below. Remarkably, our approach demonstrates a significant reduction in pre-processing while maintaining a comparable inference time for model inference. |Method|Pre-Process (ms)|Model Inference (ms)|Full Pipeline (ms)| | :-: | :-: | :-: | :-: | |VideoMAE|2496.9|26.9|2523.8| |CVPT (Ours)|238.3|25.0|263.3| W3:Some state-of-the-art methods[2,3] on the UCF101, HMDB51, and SSv2 datasets are not compared in this paper. A3:We deeply value your suggestion. Both of the two methods are raw video-based methods which leverage a more consistent and comprehensive RGB modality information. Consequently, these approaches exhibit high performance in comparison to our method. Nonetheless, it is noteworthy that their reliance on video decoding, followed by subsequent analysis makes them relatively less efficient in terms of inference time compared to our approach. Furthermore, it is pertinent to highlight that both of these methods necessitate full fine-tuning, whereas our method focuses on tuning only a notably smaller subset of parameters. Finally, we will supplement the comparison with the aforementioned methods by incorporating them into both Table 1 and Table 2. |Method|Modality|Model|Input Size [M]|PT Manner|Tunable Params. [M]|HMDB-51|UCF-101| |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |MVD-B|RGB|ViT-B|24.1|SSL|86.4 (100%)|76.4| 97.0| |$\rho$BYOL|RGB|R3D-50|12.1|SSL|46.9 (100%)|73.6|95.5| |Ours|I-Frame+MV+Res|ViT-B|6.1|SSL|0.5 (0.6%)|62.9|89.0| |Method|Modality|Model|Input Size [M]|PT Manner|Tunable Params. [M]|SSv2| |-|-|-|-|-|-|-| |MVD-B|RGB|ViT-B|24.1|SSL|86.4 (100%)|72.5| |$\rho$BYOL|RGB|R3D-50|12.1|SSL|46.9 (100%)|55.8| |Ours|I-Frame+MV+Res|ViT-B|6.1|SSL|0.5 (0.6%)|58.4| W4:The idea of this paper is mainly from [1], as mentioned in this paper, but the exploration of compressed video understanding is encouraged. A4:Indeed, our method draws inspiration from [1]. Nevertheless, when dealing with the prompt tuning challenge posed by compressed video, a unique complexity arises. **Firstly**, each individual modality within compressed video is incomplete and exhibits significant differences among the modalities. Thus, we adopt distinct prompts for encoding motion vectors and residuals within the compressed video. This departure from the unified prompt approach in [1] enhances the efficiency of the entire tuning process. We adopt a similar architecture, as illustrated in Figure 5(a) for comparison. In Table 4, we present a comparative analysis, substantiating how our distinct prompt strategy enhances the efficacy of prompt tuning. **Secondly**, motion cues in P-frames are coarse and noisy. Leveraging the inter-modality correlation, we introduce a novel operation that purifies motion cues within the SCCP module. This operation facilitates the integration of more accurate motion information into the inputs, consequently bolstering the performance. In contrast, [1] benefits from the utilization of comprehensive and accurate diverse modalities, thereby eliminating the necessity for such refinement. --- Rebuttal Comment 1.1: Comment: Thanks for authors’ effort in the rebuttal. The responses have addressed most of my questions. However, I noticed that a reviewer has raised doubts about the physical significance of the interaction between motion vector stream and spatial embeddings, and I am also interested in this issue. Overall, I will raise my rate, and I will also adjust my rate based on the answers to the above question. I hope the author incorporates the newly added experimental results and comparisons from the rebuttal into the final version of the paper. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your time and efforts in reviewing our paper. We intend to enhance the comprehensiveness of our paper by incorporating additional experiments. Q1: I noticed that a reviewer has raised doubts about the physical significance of the interaction between motion vector stream and spatial embeddings, and I am also interested in this issue. A1: The comprehensive exploration of spatio-temporal information stands as a fundamental challenge in video representation learning. In regards of compressed videos, I-frames present themselves with sparsely sampled RGB frame sequences from raw videos, encompassing not only spatial characteristics but also partial motion details. However, the limited sampling density results in insufficient representation of motion information, necessitating supplementation through the integration of motion vectors. This process embodies the physical meaning of fusing motion vector embeddings into the I-frame embedding.
null
null
null
null
null
null
Training on Foveated Images Improves Robustness to Adversarial Attacks
Accept (poster)
Summary: This paper studies the effect of foveation via adaptive gaussian blurring and color modulation in training on an image. Their goal is not computer vision driven, nor ML based, but rather to shed light on the physiological nature of the retina and spatially adaptive computation in humans -- that may prove useful for machines towards achieving robustness without adversarial training. Strengths: * This paper was very easy to read, and understand. All the figures and tables are coherent, so I applaud the efforts of presentation of the authors. * Authors co-modulate adaptive gaussian blurring with color loss and noise (however the contribution of each factor in the "foveation" is not obvious). * Authors present a solid list of experiments in o.o.d. experiments and also adversarial attacks. Their benchmarks regarding adversarial training are on spot (though which type of AT I believe is not precised). * Their claims are grounded, and not too strong, so that is good given the evidence they have presented and also works that other authors have done. Weaknesses: I believe the strongest weakness is that it's not clear to me what the contribution is of each effect : 1) noise ; 2) adaptive gaussian blurring; 3) fixations. I wonder to what point do the extra fixations add as a proxy for data augmentation when training the model. I.e.: can a false conclusion be arrived that "foveation aids in robustness" by virtue of adding extra images. I think authors need to add 2 more controls for me to be more enthusiastic about this paper: 1) Train all non-foveated networks with additional sets of images via data augmentation procedures (like rotation, mirroring, random crop and blurring) 2) Train all foveated networks with less images to match the original dataset size of the networks that recieve non-foveated inputs. In addition it's not clear to me why authors add noise as part of the foveation. Why not just go straight towards the adaptive gaussian blurring? Is there a psychological reason to add the noise in the foveation process? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * Missing References (to add in line 32 and through-out the paper regarding links of biological vision (peripheral computation) + adversarial robustness): **“Finding Biological Plausibility for Adversarially Robust Features via Metameric Tasks”. Harrington & Deza. ICLR 2022.** * Missing Reference in Line 51: I believe Pramod et al. did in fact perform robustness tests on their foveated adaptive blur image. * Missing Reference in Line 52: Characterizing a snapshot of perceptual experience. By Cohen et al. from the Journal of Experimental Pscyhology 2021. Overall I think this paper is a good contribution to the field, it should provide further discussion at NeurIPS about foveation, which many vision scientists have been longing to see these type of augmentations be used in computer vision. It would be quite nice if authors release a R-Blur augmentatino module in pytorch. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: See Weaknesses above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing valuable feedback on our paper, and asking thoughtful questions. We are glad and encouraged to find that the reviewer liked our work and found it to be unique and interesting. We hope that in our responses below we will be able to fully address the reviewer's outstanding concerns and further improve the reviewer's opinion of our work. **I believe the strongest weakness is that it's not clear to me what the contribution is of each effect : 1) noise ; 2) adaptive gaussian blurring; 3) fixations.** We agree with the reviewer that it is essential to gauge the impact of the different components of R-Blur, and therefore we have conducted an ablation study and presented it in Section 3.3 and Figure 8. To ascertain the impact of each component of R-Blur we removed or disabled each component one by one, while leaving all the other components in place. We find that removing the noise led to the greatest reduction in accuracy under adversarial attack (-33%), followed by the removal of adaptive blurring (-27%) and using 1 fixation instead of 5 (-10%). We hope that this addresses the reviewer’s concerns. We also encourage the reviewer to refer to section 3.3 for more details. If our analysis is lacking in any way we would be eager to further discuss this with the reviewer and do everything possible to improve it. **[...] can a false conclusion be arrived that "foveation aids in robustness" by virtue of adding extra images. I think authors need to add 2 more controls for me to be more enthusiastic about this paper: (1)Train all non-foveated networks with additional sets of images via data augmentation procedures (like rotation, mirroring, random crop and blurring. (2) Train all foveated networks with less images to match the original dataset size of the networks that recieve non-foveated inputs.** These tests are, in fact, reported, although, perhaps we did not communicate this clearly in our writing. Specifically, we train all our models (those with R-Blur and those without) with RandAugment or AutoAugment, which includes the data augmentations mentioned by the reviewer in addition to a wide variety of other non-geometric transformations. While we have indicated this in section 3.1 (Ln165-166), we will try to make it more explicit that the data augmentations are the same for all the models. Under these conditions we believe that the current evaluation set is fair. Apart from the baselines, all the models are trained with the same number of data augmentations, that is RandAugment + (PGD attack/R-Warp/VOneBlock/R-Blur). Nevertheless, we would be happy to further discuss with the reviewer if there is something that we missed and would try to make the evaluation as fair as possible. **it's not clear to me why authors add noise as part of the foveation.** We add noise to simulate the stochasticity in the responses of the biological neurons in the retina (Croner, et.al 1993). We seem to have omitted this citation from the paper, but we will update the paper to include it. Croner LJ, Purpura K, Kaplan E. Response variability in retinal ganglion cells of primates. Proc Natl Acad Sci U S A. 1993 Sep 1;90(17):8128-30. doi: 10.1073/pnas.90.17.8128. PMID: 8367474; PMCID: PMC47301. **Missing References (to add in line 32 and through-out the paper regarding links of biological vision (peripheral computation) + adversarial robustness): “Finding Biological Plausibility for Adversarially Robust Features via Metameric Tasks”. Harrington & Deza. ICLR 2022.** We thank the reviewer for pointing us to this paper, we will cite this in the paper. **Missing Reference in Line 51: I believe Pramod et al. did in fact perform robustness tests on their foveated adaptive blur image.** We were unable to find robustness experiments in [29]. We referred to the version on Arxiv which is also the one we cited, but we also looked at the version on PubMed. If we overlooked or misunderstood something in [29] we would be happy to be corrected. **Missing Reference in Line 52: Characterizing a snapshot of perceptual experience. By Cohen et al. from the Journal of Experimental Pscyhology 2021.** We thank the reviewer for pointing us to this paper, we will cite this in the paper. **It would be quite nice if authors release a R-Blur augmentatino module in pytorch.** PyTorch code is included in the supplementary material and will be made public on GitHub after publication. We hope that our responses fully address the reviewer's concerns and we kindly request the reviewer to consider increasing their score of our paper. --- Rebuttal Comment 1.1: Title: Thank you for addressing my comments | Worth high-lighting role of Noise Comment: Thank you for addressing my concerns. I will increase my score from 6 to 7. I think what definitely needs to be high-lighted (as another review er pointed out) is the role of noise in this framework. There is a paper I believe also cited in the submission by Dapello, Marques et al. NeurIPS 2020 (Simulating a Primary Visual Cortex at the Front of CNNs Improves Robustness to Image Perturbations) that has a similar story where most of the job is being done by the noise rather than the Gabors, so all-in-all authors should find a way to add stress this in the discussion so that future works should also analyze the co-modulation of noise with certain inductive biases and how they affect adversarial robustness in DNNs (to see what property is carrying the robustness weight). --- Reply to Comment 1.1.1: Comment: We are glad that we were able to fully address the reviewer's concern, and we thank them for increasing their score. The reviewer's recommendation is well taken and we will definitely highlight the role of noise in the camera-ready version.
Summary: In their paper the authors introduce R-Blur as a foveation technique for biologically inspired defense against adversarial attacks on DNNs. With their approach they try to simulate the human visual field by blurring and desaturating the image depending on the distance to a given fixation point. They continue to evaluate their approach on Ecoset and ImageNet and try to show that R-Blur can achieve results like adversarial training. Furthermore, they provide an ablation study to prove the significance of the individual components. Strengths: - I find the idea of taking a biologically inspired approach to foveation and the attempt to recreate the human perceptive field through color and grey acuity appealing. - The results for the common non-adversarial corruption show an improvement compared to other methods indicating some significance. Weaknesses: - In Figure 5 we only see a comparison with two baselines that feature little to no adversarial defense. Here it is clear that R-Blur achieves higher accuracies under white-box attacks. A comparison with adversarial training would be more insightful, because later only the mean value of these experiments is presented which indicates that AT outperforms R-Blur by a margin. This weakens the statement that R-Blur generalizes robustness. - The ablation study provides some insights into the importance of some components but is not explained adequately, e.g., the dynamic selection of the fixation point seems to worsen the performance. - The formatting to indicate the best and the second-best entries of Table 1 is somewhat misleading: it would be better to underline the second-best result or use colors. - Writing: Some of the figure captions are not comprehensible or not descriptive enough (e.g., Figure 2 and Figure 3). Overall: Although R-Blur is well-motivated and implemented in a reasonable way, only a small part of the experiments indicates a significant improvement (mainly the non-adversarial corruptions, Table 1.) while in general adversarial training seems to be superior. Currently I decide for a borderline reject, but if the before mentioned weaknesses are addressed and the following questions are answered, I am willing to increase the rating. Minor Detail: Some figures (e.g., Figure 7 and especially Figure 2) are too small and are sometimes missing entries (e.g., R-Blur-5FL in Figure 7(b).1 or 0 entries in Figure 5) Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The main motivation for the parameter setting for the visual acuity estimation is based on the photopic and visual acuity in human vision which should approximate the real curves (Figure 1(a)). It is not entirely clear how this estimated acuity relates to the original, because there it is measured on the x-axis through the degree in the visual field. In Figure 1 it seems to be the distance-based eccentricity although it should be zero were the original had zero degrees. How exactly does your approximation match the original and why does the transfer work? - In the related work section several other foveation-based robustness models are mentioned and only a very short comparison is given. A more detailed comparison with other foveation techniques in general would be insightful as well. It is mentioned that these techniques were not used in an adversarial context before, but how does R-Blur stand out? - For the eccentricity computation only on maximum of the Manhattan-Norm was considered. Were other quantified metrics (e.g., L2-Norm as in the actual human perceptive field) used in the experiments and how did they perform? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors address their limitations adequately in their Limitations section. These include a significant loss of clean accuracy and the current fixpoint selection method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing valuable feedback on our paper. We find it encouraging that the reviewer will earnestly consider improving their rating if their questions are answered. **In Figure 5 [...] A comparison with adversarial training would be more insightful** The goal of Figure 5 is to show that simulating foveation with R-Blur improves robustness. This can be broken into two sub-goals: (1) showing that augmenting a model with R-Blur improves its adversarial robustness (ResNet vs R-Blur), and (2) showing that the improvement in robustness is due to simulating foveation and not due to simply applying any arbitrary transforms at inference time (RandAffine vs. R-Blur). Furthermore, we do not claim that R-Blur is a SOTA adversarial defense, and, therefore, we do not position it as a competitor of AT as far as whitebox adversarial attacks are concerned. Therefore, including AT in Figure 5 would only convolute the message it was meant to convey. However, at the reviewer’s request, we have created Figure 1 in the global response that compares the accuracy of R-Blur and AT at different adversarial perturbation sizes. Likewise, figure 5 in the appendix shows the breakdown of the accuracy of all the models against common corruptions of various types and strengths. **[...]AT outperforms R-Blur by a margin. This weakens the statement that R-Blur generalizes robustness.** We would like to point out that AT outperforms R-Blur only on whitebox attacks but not on non-adversarial corruptions. It is expected that AT models will be more robust to whitebox attacks. This is because they are trained on adversarial attacks that are very similar to the attacks used during testing, and therefore the gap between the training and test distribution of AT models is much smaller than models like R-Blur which were not trained on adversarially perturbed data. For this reason, we state in Ln255-256 that we are comparing AT and R-Blur primarily in terms of their robustness to non-adversarial image perturbations. From Table 1 we see that AT has almost no impact on robustness to non-adversarial image perturbations. On the other hand, R-Blur improves the robustness of the model, not only to adversarial attacks but also to non-adversarial perturbations. These results show that the robustness of AT is limited to a small class of perturbations generated by norm-bounded adversarial attacks, while the robustness of R-Blur generalizes better across different types of perturbations. We acknowledge that a lack of clarity in our presentation might have given rise to this misunderstanding. Perhaps the mean scores in columns 2-4 are not necessary and are obscuring the point we intend to convey. We will consider removing them. ** importance of dynamic fixation selection not explained adequately in ablation** We had decided to not discuss the dynamic fixation selection in the text for two reasons: (1) the fixation selection model (DeepGaze-III) is not part of R-Blur and we only use it as a tool, and (2) Figure 8 indicates that switching between dynamic and static fixations trades off <= 1% accuracy for robustness, however verifying this trade-off would require more experimentation which we felt was out of the scope of this paper. Nevertheless, we will briefly discuss these points in the text. **[...]In Figure 1 it seems to be the distance-based eccentricity[...]** We request the reviewer to please correct us if we’re wrong but it seems that the reviewer is referring to Figure 2, not Figure 1. The x-axis in Figure 2 is indeed mislabeled and is showing the index of the pixel on a horizontal line through the fixation point. We have updated this figure to reflect the eccentricity as computed by equation 1. We will include this figure in the paper and have also included it in the global response (Figure 2). **[...]A more detailed comparison with other foveation techniques[...].** We will include the following comparison in the paper. [19,20,21,22] claim to simulate some aspect of foveation. [19] implemented foveation by cropping the salient region of the image at inference time. Firstly, the biological plausibility of [19] is questionable because instead of simulating the degradation of acuity in the periphery of the visual field, it simply discards it. Secondly, they crop the image _after_ applying the adversarial attack, which likely obfuscates the gradients, and hence any robustness they report is suspect. On the other hand, [20] and [21] apply foveation in the latent feature space (the intermediate feature maps generated by a CNN) rather than directly to the image pixels as we do so their methods are not directly comparable to ours. To the best of our knowledge [22] (R-Warp) is the only foveation-based adversarial defense that is biologically plausible, works directly on the pixels, and avoids gradient obfuscation, which is why we compare against it in this paper. Furthermore, in Ln50-52 we do acknowledge that [28,29] also simulated foveation via adaptive blurring, and, point out that R-Blur builds upon these methods by (1) also simulating the loss in color sensitivity and the stochasticity of neural responses, and (2) rigorously evaluating the impact on robustness. **Were other quantified metrics [for eccentricity] used [...]?** We did not try other distance metrics. We do not expect a change of distance metric to significantly influence the accuracy and robustness because it will impact only a few pixels. On the other hand, their impact on the speed and memory requirements might be more significant since extracting a circular region of the image (as necessitated by the L2-norm) would be more computationally intensive than simply slicing the tensor to extract a square region of the image. We hope that our responses fully address the reviewer's concerns and we kindly request the reviewer to consider increasing their score of our paper. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I was indeed referring to Figure 2 instead of Figure 1. Sorry, for the confusion. My concerns were appropriately addressed and I am raising my score to 7.
Summary: The paper presents a biologically inspired approach that improves the robustness of DNNs against samples with adversarial perturbations or common corruptions. In the proposed approach the models are trained using the images transformed using the proposed R-Blur (Retina Blur) transformation. The proposed R-Blur simulates foveation by adaptively blurring the image pixels and reducing the color saturation based on the distance from the given fixation point. The effectiveness of the proposed approach is demonstrated by considering models trained on CIFAR-10, Ecoset, and ImageNet datasets, respectively. Strengths: The proposed biologically inspired approach is a non-adversarial training approach that yields robust models. These models are robust to unseen adversarial and common corruptions. Experimental results validate the same. Weaknesses: 1. The proposed approach depends on an external model for fixation point generation. The fixation model can act as a computational and performance bottleneck. It is not clear whether the white-box evaluation considers the susceptibility of the fixation model. 2. From the ablation study, it can be seen that noise plays a critical role. The paper fails to highlight the role of noise in the proposed R-blur that simulates foveation. 3. Missing experimental details: Important experimental setup details are missing (such as architecture details, training setup, and attack setting). Furthermore, certain experiments are missing to demonstrate the effectiveness of the proposed approach (computation time, auto attack). Refer to the question section. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Is the proposed R-blur frame differentiable? 2. For the white box adversarial attack, was the R-blur and the fixation model considered for generating the adversarial samples? 3. Is the perturbation size ($\epsilon$) defined for 0-1 or 0-255 pixel range (L178 - L179)? 4. To further validate the adversarial robustness of the proposed approach present sanity check described in [a,b]. 5. How is the pixel wise fusion of color (HxWx3) and the grayscale (HxW) image performed? 6. Numbering for equation below L142 missing. 7. It is not clear why the subset of test images are used for Ecoset and ImageNet. Are the models trained on the entire training set or subset for these cases? 8. The data-augmentation techniques described in L164 to L167 are used only for the proposed approach or for all compared methods? Sensitivity of the proposed approach to fixation method. Training details of DeepGaze-iii are missing (i) dataset and (ii) depth of the ResNet [L192]. 9. Provide results for AutoAttack, apart from APGD attack. 10. In L202, L204, and L254: does "..ResNet.." mean WideResNet-22-4 11. Is the model robust to geometric attacks (e.g. translation, rotation, and affine)? 12. Provide comparison of the training and inference time for the methods considered. [a] Carlini et al. "On Evaluating Adversarial Robustness" arxiv 2019 [b] Athalye et al. "Obfuscated Gradients Give a False Sense of Security" ICML 2018 Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The main limitation of the proposed approach is dependency on the external fixation model which can act as a bottleneck in terms of computation and performance. Further, it is not straightforward to adapt the proposed approach to other computer vision tasks such as segmentation, and multi-object classification. The authors are suggested to include a discussion on the same. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback and hope that our responses address their concerns **[...]The paper fails to highlight the role of noise [...].** We will highlight the role of adding noise in writing. Figure 1 in the global response shows that under moderate adversarial perturbation, the model with only Gaussian noise is almost half as accurate as a model with R-Blur on Imagenet. This result and Figure 8 in the paper show that the noise alone does not explain the robustness gains of R-Blur. **Is [...] R-blur [...] differentiable?** Yes, R-Blur is linear and fully differentiable. **[...] Were R-blur and the fixation model considered for generating the adversarial samples?** The white-box attack takes R-Blur into account: gradients are propagated through it to the input image. We remove the fixation model after using it to sample 5 fixation points from the clean input image, which remain constant in subsequent iterations while the attack is computed. Details in Appendix Section A. **[...] present sanity check described in [a,b].** We performed these checks and found no issues – more attack iterations reduce accuracy, computing the attack over the average logits from several forward passes (expectation over transformation) has no effect on accuracy, converting R-Blur to a straight-through estimator in the backward pass reduces attack effectiveness. (See Appendix Section A) **How is the pixel wise fusion [...] performed?** The grayscale image is HxWx3 with the values replicated on all color channels. The pixelwise fusion is done by taking a weighted sum according to the equation on Ln142 and the following code: `final=(W1*gry_blr+W2*clr_blr)/(W1+W2)` Where gry_blr and clr_blr are the blurred grayscale and colored images, and W1 and W2 are HxWx1 matrices containing the gray and colored acuity estimates at each location. **[...] why the subset of test images [...]. Are the models trained on the entire training set [...]?** The models are trained on the entire training set. They are evaluated on the subset of the test data to speed up experimentation. We randomly shuffle the test set before extracting the subset to eliminate any biases due to dataset organization. **The data-augmentation techniques described in L164 to L167 are used only for the proposed approach or for all compared methods?** The same data augmentation is used to train all the models. **Sensitivity of the proposed approach to fixation method.** The accuracy of R-Blur on clean images _is_ sensitive to the choice of fixation point. As mentioned in Section 5, and in Appendix B, if the optimal fixation point is chosen (by exhaustive search) for each image, the accuracy of R-Blur on clean Imagenet increases from 60% to 70%, which is almost on par with the standard ResNet. Further, the robustness of R-Blur is also sensitive to the _number_ of fixation points (See Figs 7 and 8 in the Appendix). **[...]details of DeepGaze-iii are missing [...].** We will update the paper with the details mentioned below: We used the training code from the deepgaze-iii github repo, and replaced the DenseNet-201 with the R-Warp/R-Blur augmented XResNet-18-2 trained on ImageNet. The ResNet in DeepGaze and the ResNet used for classification share the same parameters. This improves performance and reduces the additional parameters in the final model. Following [41] we train DeepGaze on SALICON (Jiang et. al, 2015). This corresponds to Phase 1 of training mentioned in Table 1 of [41]. We did not notice any benefits in our use case of the additional fine-tuning so we decided to skip Phases 2-4. M. Jiang, S. Huang, J. Duan, Q. Zhao, “SALICON: Saliency in Context”, CVPR’15 **Provide results for AutoAttack [...].** We ran these experiments for the R-Blur augmented model trained on Imagenet and show results in Figure 1 in the global response. AutoAttack reduces the accuracy by < 3% compared to APGD, and thus would not change any of the trends observed in the paper. **Is the model robust to geometric attacks (e.g. translation, rotation, and affine)?** We use RandAugment and AutoAugment to introduce a lot of geometric transformations into the training data therefore _all_ the models exhibit a high degree of invariance to the geometric transformations mentioned above. **[...] comparison of the training and inference time [...].** Table 1 in the global response presents this comparison and shows that R-Blur causes minimal slowdown (1.1x compared to the vanilla ResNet) during both training and testing. Also, increasing the number of fixations slows R-Blur only sub-linearly (5 predefined fixations => 3x slowdown). Introducing dynamic fixation prediction has a greater impact on speed because each image is assigned different fixation points and so R-Blur/R-Warp can not be applied to them as a single batch. This shortcoming is likely common to most fixation transforms, and is not unique to R-Blur. In fact, under dynamic fixation prediction, R-Blur is faster than R-Warp. **[...] fixation model can act as a bottleneck [...].** The fixation model is not a strict dependency. As shown in the ablation study (Figure 8), removing it (and using 5 predefined fixation points) has a minor impact on the accuracy and robustness of the model. For latency-sensitive scenarios, the fixation prediction model may be removed. **[...] not straightforward to adapt [...] to other computer vision tasks [...]** We would appreciate it if the reviewer could elaborate on the potential hurdles they see. In most of the CV tasks a CNN is used to compute an embedding for the image. If only 1 fixation is used then the standard CNN can be swapped with a R-Blur augmented CNN. For multiple fixations, one can get the image embeddings for each fixation point independently and aggregate them by summation, concatenation, etc. We will discuss this in the paper. Some questions couldn’t be answered in the character limit but we can address them in the follow-up. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Please provide the response for the following: 1. Regarding the adaptation of the proposed method to other CV tasks: Can the authors explain how the proposed method can be used for semantic segmentation tasks? Here, the model has to predict labels for each pixel. 2. For the $l_\infty$ attacks, is the perturbation size defined for 0-1 or 0-255 pixel range (L178 - L179)? 3. Provide results for the sanity check experiments described in [a,b]. Consider the CIFAR-10 dataset and $l_\infty$ attack. (i) plot of accuracy vs. perturbation size, (ii) plot of accuracy vs. attack iterations, (iii) black-box attack, and (iv) results for FGSM and PGD attacks. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for responding and raising important questions. We have responded to each question below: **Q1** Several popular semantic segmentation, like DeepLab (Chen, et.al 2018) and FCN (Long, et.al 2015), extract intermediate feature maps from a pretrained deep CNN, which are then further processed by DNNs (usually CNNs) to predict logits for each semantic class at each spatial coordinate. The resulting logit map may be upscaled to match the spatial dimensions of the input image if any downsampling was involved in the earlier steps. In the case of DeepLabv3+, two feature maps are extracted from different layers of a ResNet-101. The map from the earlier layer represents low-level features, while the map from the later layer represents higher-level features. The high-level features are processed with dilated convolutions, while 1x1-convolutions are applied to low-level features. The processed low and high-level features are concatenated channel-wise and passed through a 3x3-conv to predict logits. The logit map is then upscaled to match the image size. If only 1 fixation is used, an R-Blur augmented ResNet can simply replace the vanilla CNN in most semantic segmentation models. For example, the pretrained vanilla ResNet-101 in DeepLabv3+ can simply be replaced with a pretrained R-Blur augmented ResNet-101 without the need for any further modifications. If multiple fixations are used then we would need to make some simple modifications. One option would be to run semantic segmentation and obtain logit maps independently for each fixation point and then average them to get the final logit map. Another option would be to extract low and high-level feature maps independently for each fixation point, then aggregate them by summing, averaging, or concatenation before passing them to the downstream DNN that computes logit maps from them. In either case, the modification is relatively simple and easy to implement. We expect the robustness of R-Blur augmented ResNets to carry over to any semantic segmentation model that uses them, however, verifying this is part of future work. Chen, Liang-Chieh, et al. "Encoder-decoder with atrous separable convolution for semantic image segmentation." Proceedings of the European conference on computer vision (ECCV). 2018. Long, Jonathan, Evan Shelhamer, and Trevor Darrell. "Fully convolutional networks for semantic segmentation." Proceedings of the IEEE conference on computer vision and pattern recognition. 2015. **Q2** It is defined in 0-1. The image pixels are also normalized by 255 to be in 0-1. **Q3** (i) This plot is presented in Figure 5 of the paper and the results are repeated in the following table as well. We see that accuracy decreases with increasing perturbation size. | $\|\|\epsilon\|\|_\infty$ | Accuracy | | ---------------------------- | ------------ | | 0 | 90.4 | | 0.002 | 84.3 | | 0.004 | 77.1 | | 0.008 | 55.4 | (ii) This plot for Imagenet is presented in Figure 1(a) of the appendix included with the supplementary material. Upon the reviewer's request we have repeated these experiments for CIFAR-10 using 100-step APGD with $\|\|\epsilon\|\|_\infty=0.008$ and the results are in the table below. We see that accuracy decreases with increasing number of steps. | Steps | Accuracy | | ------- | ------------- | | 1 | 61.7 | | 5 | 56.1 | | 10 | 55.6 | | 25 | 55.4 | | 100 | 55.4 | (iii) As requested by the reviewer, we evaluated the CIFAR-10 R-Blur model under the back-box Square attack [Andriushchenko et.al 2020] with $\|\|\epsilon\|\|_\infty=0.008$ and observed it achieved 64.9% accuracy. In comparison, the APGD attack was significantly more successful and brought the accuracy of the model down to 56.1% with only 5 iterations. We would also like to point out that a black-box attack is included in AutoAttack, for which we have presented results in Figure 1 of the global response. Andriushchenko, Maksym, et al. "Square attack: a query-efficient black-box adversarial attack via random search." European conference on computer vision. Cham: Springer International Publishing, 2020. (iv) Upon the reviewer's request, we evaluated R-Blur under FGSM attack and compared its accuracy with 100-step APGD in the table below. We see that for each perturbation size APGD is able to achieve lower accuracy than FGSM. | $\|\|\epsilon\|\|_\infty$ | FGSM | 100-step APGD| | ---------------------------- | -------- | ----------------- | | 0.002 | 85.2 | 84.3 | | 0.004 | 78.1 | 77.1 | | 0.008 | 61.5 | 55.4 | All these results indicate that R-Blur does not obfuscate gradients and does genuinely improve adversarial robustness. We hope we have addressed the reviewer's concerns. We would be happy to continue this discussion if further clarification is required. If no further concerns remain, we would like to request the reviewer to consider increasing the rating. Thank You.
Summary: This paper proposes a data augmentation technique named R-Blur to improve the robustness of vision classifiers against adversarial perturbations and other non-adversarial image corruptions. The method is inspired by human visual systems where the perceived scene consists of varying levels of fidelity. As such, the training images are modified in a way that adaptive Gaussian filtering is applied centered around the fixation point in the image. Results on CIFAR-10, Ecoset, and Imagenet demonstrate that R-Blur improves robustness to adversarial perturbations and common corruptions compared to standard trained models. Strengths: Originality: There are only a few biologically inspired adversarial defense techniques. The proposed method mimics the peripheral vision in human visual systems and modifies the training images with an adaptive Gaussian filter. The approach is very unique and interesting. Quality: The paper is well-written. Clarity: The motivation behind the proposed method as well as the overall structure of the paper is clear. The technical details are explained clearly. Significance: The presence of adversarial examples presents a security concern for deep neural networks utilized in various applications. This paper introduces a novel approach to bolster network robustness. Weaknesses: This paper has two main weaknesses. 1. The choice of baseline methods in evaluation. The R-Blur method, in its essence, is a Gaussian data augmentation technique, while the only non-adversarial training technique in the baseline is RandAffine. To concretely verify that the adaptive filtering from R-Blur is indeed improving the robustness of the model beyond simple Gaussian data augmentation, other baseline methods such as Gaussian augmentation (with difference variances) and l2 regularization are necessary. 2. The paper position itself as an approach to improve the adversarial robustness of deep neural networks. However, results in Sec. 3 show that the improvement in adversarial robustness towards the APGD attack is significantly lower than adversarial training. Also, the choice of $\epsilon$ in the adversarial robustness is much smaller than the standard values used in other works. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Question/clarification: Is $W_{V}$ in (1) the width of the image? Are (2), (3), and (4) from previous work ([26]?), or are they part of the novel contributions from this paper? Why are Laplace and Cauchy distributions used? Perhaps some additional discussion following the definitions can be helpful to further improve the clarity. In the evaluation of adversarial robustness, why is only APGD in Autoattack used, rather than the complete version of AutoAttack? In other adversarial training methods, we can explicitly control the trade-off between standard accuracy and robust accuracy. For instance, $\beta$ in TRADES, $\epsilon$ of the perturbation in standard adversarial training. Does such a concept exist for R-Blur? I think understanding such a mechanism can further improve the adversarial robustness of models trained with R-Blur. Suggestion: In Figure 4, it seems that the sequence of fixation points does not converge at all. Also, from Ln182, it seems that the results are based on randomly selected fixation points. One suggestion is to identify the fixation point as the pixel location with the highest saliency. Saliency-based data augmentation (i.e., Ma et al and Uddin et al) can be a good starting point. Be consistent with the use of gray or grey (grayscale or greyscale). Ma, Avery, et al. "SAGE: Saliency-Guided Mixup with Optimal Rearrangements." arXiv preprint arXiv:2211.00113 (2022). Uddin, A. F. M., et al. "Saliencymix: A saliency guided data augmentation strategy for better regularization." arXiv preprint arXiv:2006.01791 (2020). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations including the trade-off between the robust accuracy and the standard accuracy are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We hope that in our responses below we will be able to fully address the reviewer's outstanding concerns and that the reviewer will consider increasing their score of our paper. **verify that the adaptive filtering from R-Blur is indeed improving the robustness of the model beyond simple Gaussian data augmentation** To verify if adaptive blurring in R-Blur does indeed contribute to robustness we compare R-Blur with models that either add Gaussian noise (with the same variance as the noise in R-Blur) or perform non-adaptive Gaussian blurring (with the variance equal to the maximum variance used by R-Blur). The results are presented in Figure 8 and we observe that both of these models are less robust than R-Blur. We have repeated this analysis for Imagenet models ( Figure 1 of the global response) and observed that the R-Blur model consistently achieves higher accuracy than baseline methods on all adversarial perturbation sizes. We also trained models augmented with non-adaptive Gaussian blur of different variances, as well as a model that combines non-adaptive Gaussian blur with Gaussian noise (Figure 4 of the global response), and found R-Blur to have superior robustness then all of them ** improvement in adversarial robustness towards the APGD attack is significantly lower than adversarial training.** We would like to clarify that the objective of this paper was to study the impact of foveation on the robustness of DNNs and we have not claimed that R-Blur is superior to adversarial training as a defense. Our claims are that (1) R-Blur improves the robustness of CNNs to image perturbation – both adversarial and non-adversarial, compared to the vanilla CNN and other biologically-inspired methods, and (2) unlike adversarial training, which seems to have negligible effect on robustness to non-adversarial image perturbations, R-Blur increases robustness to a variety of image perturbations, not just L_p bounded adversarial attacks. We believe that these claims are justified by the results. Nevertheless, we will rewrite certain parts of the paper to ensure that no confusion remains regarding our objectives and claims. **the choice of eps in the adversarial robustness** We sought to validate the claim that R-Blur improves robustness to perturbations – both adversarial and non-adversarial, compared to the vanilla CNN and other biologically-inspired methods. Therefore we chose $\epsilon$ values on which the compared models had accuracy greater than 0%. As shown in Figure 5 in the paper and Figure 1 in the global response, the accuracy of all the models, except AT, on ecoset/imagenet becomes close to 0% on the largest perturbation we used so evaluating at larger perturbation would not have served any purpose. Furthermore, the perturbation sizes ($\epsilon$) used in our evaluation are on par or higher than the sizes used in papers on biologically-plausible adversarial defenses [18,21,22]. **Is W_V in (1) the width of the image?** Yes, W_V is the width of the image – 224. **Are (2), (3), and (4) from previous work ([26]?), [...]?** The equations 2, 3, and 4 are not from previous work [26]. We devised these equations to match the curves of visual acuity presented in Figure 14.21 (B) in [26]. **Why are Laplace and Cauchy distributions used?** The Laplace and Cauchy distributions are chosen to approximate the shape of the curves in Figure 14.21 (B). We will update the paper to make this clearer. **why is only APGD in Autoattack used** We used only APGD because (1) the certified accuracy of R-Blur (Figure 7) was similar to empirically measured accuracy under adversarial perturbations (Figure 5 top row), and (2) our analysis in Section A of the Appendix revealed no gradient obfuscation effects. Under these conditions, we were confident that evaluating with only APGD would give a reliable measure of robustness. At the reviewers’ request, we ran these experiments for the R-Blur augmented model trained on Imagenet. Figure 3 in the global response shows that the accuracy under AutoAttack is only slightly lower than the accuracy under APGD, with the maximum difference being 3%, which would not change any of the trends observed in the paper. **Can the accuracy-robustness tradeoff be controlled for R-Blur?** The accuracy and robustness can be traded with each other by controlling the noise added in R-Blur. Figure 6 in the Appendix illustrates this trade-off. We see that as the variance of the noise is increased from 0.125 to 0.5 the clean accuracy drops from 90% to 85%, while the accuracy under attack increases from around 30% to 50%. **In Figure 4, [...] the sequence of fixation points does not converge** The fixation points in Figure 4 do not converge to a single spatial location by design, as stated in the caption. This is in line with human/animal behavior, where it has been observed that they fixate on different salient parts of the scene to progressively accumulate more information. **from Ln182, it seems that the results are based on randomly selected fixation points.** We would like to point out that the Ln180-182 talk about the training set up. However, as mentioned in Ln185-185, during inference a model of human gaze (DeepGaze-III) is used to select the fixation points. **One suggestion is to identify the fixation point as the pixel location with the highest saliency.** While the reviewer’s suggestion is well taken and we thank the reviewer for sharing relevant papers, we would like to clarify that our approach does in fact subsume this technique. We use DeepGaze-III to predict a saliency map for a given image (see Figure 4 top-row). From this saliency map we pick the most salient coordinate as the fixation point. We repeat this process, as shown in Figure 4, to get a sequence of multiple fixation points. We will rewrite the relevant parts of the paper to ensure that it is clear that we are already using saliency maps to select fixation points. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification. Most of my concerns have been addressed. I will raise my score to 5. --- Reply to Comment 1.1.1: Comment: We are glad that we were able to fully address the reviewer's concerns and we thank the reviewer for increasing their score. Given that 5 is a borderline score, we wanted to ask if there are any lingering questions or concerns due to which the reviewer is currently unable to give a higher score to our paper? We are committed to working with the reviewer to address any issues and improve our paper.
Rebuttal 1: Rebuttal: We are very thankful to the reviewers for taking the time to read our paper carefully and providing valuable feedback that will undoubtedly help strengthen the paper and increase its impact. We also thank the reviewers for asking thoughtful questions and raising important concerns. We have responded to each reviewer’s questions and concerns in separate responses to their respective reviews. Due to space constraints we were unable to respond to comments about editorial issues, like formatting, typos and missing citations. Nevertheless, we would like to assure the reviewers that we acknowledge those comments and we will update the paper accordingly for the camera ready. To reduce ambiguity we answer each question/concern separately, quoting the reviewers' words (in bold font), either verbatim or summarized, before providing our response. If there are instances where our responses do not fully address the reviewers’ concerns or if it appears that we may have misunderstood the reviewers’ intent, we encourage the reviewers to ask follow-up questions and allow us the opportunity better understand their point of view and to provide additional details and clarification. If our responses do adequately address the reviewers’ concerns we request the reviewers to consider increasing their scores. We have also performed some additional experiments and analyses based on requests by the reviewers and have presented the results in the PDF attached to this response. The tables and figures in the PDF are referenced in the responses to the specific questions/comments that requested them. The list of tables and figures in the PDF is also listed below: - Table 1: Computation time of all the methods compared in this paper - Figure 1: Accuracy of R-Blur, baseline methods, adversarial training, non-adaptive Gaussian blur, and Gaussian noise at different levels of adversarial perturbation. - Figure 2: A corrected version of Figure 2 from the paper. - Figure 3: Accuracy of R-Blur under APGD and AutoAttack at different levels of adversarial perturbation. - Figure 4: Comparison of accuracy under clear images and adversarial attacks between R-Blur and models with Gaussian blurring of difference variances. Notes: - We refer to the Appendix in our responses below. This appendix is included in the supplementary material submitted with the paper. Pdf: /pdf/a73d3136e45c135b12391027a9c085c0cec48f0d.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
What Do Deep Saliency Models Learn about Visual Attention?
Accept (poster)
Summary: This paper proposes a new framework, which can decompose the learned features of a saliency model into trainable bases (using [42]), those bases are combined to formulate the final saliency map, and the weight of the combination indicates the contribution of each basis. The semantic meaning of each basis can be explored by matching the bases to a probe dataset (Visual Genome is used in this draft), Thus, saliency can be studied based on visual concepts, .e.g, different types of visual relationship, or objects, exploring what the key factors are behind the saliency region. Strengths: This draft is well written, the organization is clear, and the method is explained in details. The proposed method attempts to solve a fundamental and important question, what features are important to saliency. Weaknesses: I like this study attempts to solve a very interesting question, however, some of the designs may need more discussion. This study is based on the existing basis de-composition[42] technique. The main contribution is to link those bases to other visual concepts by computing thresholded IOU. This design heavily relies on the probing dataset, which makes the hyper-parameter threshold a little tricky. What if other datasets are used, will the bases also match other concepts, how one can validate the matched concept (objects or events) is valid. Some studies attempt to explore the learned feature based on known probing concepts, e.g., designing a dataset specifically for texture and shape concepts(Geirhos, Robert, et al. 2018). Exploring visual concepts (bases) by matching to other datasets is not very convincing. Should one exhaustively match to all concepts to find the best match (I highly doubt if this is feasible or how to define the universe), or setting a threshold and matching to a “local”’ solution(a specific dataset)? Thus, matching bases to a specific dataset is not very solid to show what the bases are. More generally, assuming we can find the “ground-truth” (the true meaning of those bases), I believe saliency is way more complicated than simply based on visual concepts. From Figure 2, we can see that street sign is less important than faces. We can imagine the street sign could be the most salient region if the picture only contains a street sign in the middle. Given two images, one has a face in the middle and one has a street sign in the middle, they could be equally salient. Thus, one explanation for figure 2 could also be it shows the bias of the dataset which used to train the model, more faces appeared in that dataset. There also could be biases in the probe dataset, Visual Genome in this study. One concept, say “having meeting”, is more salient could be the visual features happen to match one of the bases. The real most salient basis does not exist in that dataset. Another problem in this study is that saliency may not depend on the semantic. Imagine we have ten faces in a picture, the middle could be the most salient, while the others are not. Thus, study which visual basis is the most salient is not convincing. This may also happen to the Visual Genome dataset. The region for the visual relation is the union of the head and the tail, which is larger than the two objects. The bigger union region may better correlate with the well-known centre bias regardless of what the actual content. This is why “action” is the most salient in all of the datasets in Figure 3. Overall, applying basis decomposition [42] is not new, the finding of the semantic importance given saliency may simply showing dataset biases. Geirhos, Robert, et al. "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness." arXiv preprint arXiv:1811.12231 (2018). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Exploring unknown bases to a dataset is questionable, which requires further study. The finding of those bases may not explain saliency, but the biases of the training and probing datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. Q**: What if other datasets are used, will the bases also match other concepts? Should one exhaustively match all concepts to find the best match? **R**: We appreciate the thoughtful questions raised about the potential limitations of matching bases to a single dataset. We agree that the matching of bases and concepts may depend on specific datasets or thresholds used, and it is challenging to define or exhaust all concepts in the universe. Our study follows the common practice of leveraging big datasets with key concepts at scale [42, 43]. In particular, we use Visual Genome, the biggest and most diverse naturalistic dataset with a sufficiently large number of images (100,000 images) and fine-grained annotations, providing a rich and diverse set of objects, scenes, attributes, relationships, and contexts. Our approach automatically captures salient and non-salient semantics (Figure 2) that agree with previous studies [19, 40, 41], as well as discriminative attention patterns [49-52] in various contexts (e.g., for different participant groups, stimuli, and time durations in Figure 6). These findings validate the effectiveness of our method in drawing key insights into saliency. **2. Q**: I believe saliency is way more complicated than simply based on visual concepts. Given two images, one has a face in the middle and one has a street sign in the middle, they could be equally salient. **R**: Saliency suggests relative importance and thus the respective capability in attracting attention when multiple objects or semantics co-occur in a scene. We agree with the reviewer that it is dependent on other factors such as the context of an image and the locations of the objects. We also agree that with a face in the middle and a street sign in the middle, both are salient in their individual contexts. It is in fact difficult to understand the relative importance of objects or semantics with iconic images with one or few dominant objects in the center, and this is indeed the reason that we leverage datasets with images that (1) have multiple objects co-occurring in images, and (2) include diverse objects in naturalistic context; so the confounder effects such as center bias are neutralized to some degree, and statistical conclusions of their importance can be derived. Therefore, instead of looking at saliency in an individual image, our framework derives conclusions from a more global perspective from a large set of naturalistic images with various objects and contexts. **3. Q**: One explanation for Figure 2 could also be it shows the bias of the dataset which used to train the model, more faces appeared in that dataset. **R**: Our method uses the average IoU score across diverse images annotated with a specific concept to measure its importance in Equation 5, which accounts for the different frequencies of concepts. Therefore, higher frequencies of a concept do not necessarily result in higher importance. For instance, the "mouth" has an equivalent frequency to "face" but is assigned much lower importance, the “floor” has fifteen times more frequency than “cloudless” but their importance shows the opposite. **4. Q**: The finding of those bases may not explain saliency, but the biases of the training and probing datasets. **R**: We acknowledge that both the training dataset and the probing dataset may introduce biases in the saliency analysis. However, the large-scale Visual Genome dataset mitigates the risk of biases from limited data samples present in smaller-scale studies. This approach helps in understanding the general strategies employed by deep saliency models for capturing attention across diverse visual contexts. As highlighted in Section 4.5, our method results in meaningful results that are consistent with human vision studies [49-52], validating its effectiveness in explaining saliency instead of biases. **5. Q**: The region for the visual relation is the union of the head and the tail, which is larger than the two objects. The bigger union region may better correlate with the well-known center bias regardless of the actual content. **R**: Our approach takes advantage of two key strategies to counter center biases and the size of semantics, and thus ensure a faithful measurement of the alignment between the probabilistic distribution of a semantic and the segmentation of different semantics. First, size-insensitive measurement. We follow the general methodology of [43] and measure the alignment with the IoU score and adaptive thresholding (i.e., thresholds that capture the top 20% quantile level of distributions for different bases). This method ensures that the alignment score will be high only if the probabilistic distribution of the basis aligns with the majority of the segmentation mask, regardless of the average size of the semantics. For instance, on average, a "train" is over seven times larger than a "face", but our approach correctly recognizes the significantly higher importance of "face" in Figure 2. Second, the choice of dataset. Unlike many image datasets (e.g., ImageNet and MIT) focusing on iconic images where few objects are centered, Visual Genome used in our study is designed to encompass a rich set of semantic concepts in the same scene. Due to a more balanced distribution of visual features, the most salient object is not always at the center of the scene. Therefore, this design alleviates center biases. --- Rebuttal Comment 1.1: Title: feedback Comment: Thanks the authors for the further explanation, however, not all of the questions are answered in detail. 1. " Visual Genome, the biggest and most diverse naturalistic dataset with a sufficiently large number of images". I looked into the VG dataset, which is also very skewed, 90% relations belong to the categories of possessive(has part of) and geometric(above behind). More importantly, "people" and "vehicles" are also dominant in the objects. Thus, it is difficult to say the VG dataset is "large" (or "fair"?) enough to test saliency. Moreover, it is unknown to me how to define a dataset is large enough for this purpose. This also explains the use of the adaptive threshold. The true unknown concept does not exist, so a lower threshold is used to select the second or third best match. 2.b "include diverse objects in naturalistic context; so the confounder effects such as center bias are neutralized to some degree,". I looked in to the VG data, I cannot see how the diverse objects can solve the centre bias effect. In contrast, the definition in visual scene graph could worsen this problem. The visual feature of relation, "having meeting", the union region of two objects correlates more with centre bias, which explains why the relationship is more salient, instead of reducing the center bias effect. 3. Thanks for the explanation, this could partially answer the frequency in the probing dataset. Could you please add a little more (maybe one sentence about this frequency normalization Z), is this new in this study? I might miss it in [43]. 4. Again, it is still unclear how to define VG is large enough for this purpose, and the relation (union region) definitely correlates more with the saliency region due to the center bias. 5. a) This size-intensive measure Eq.4 cannot handle the center bias problem. Imagining that most saliency appears more in the center of an image, the union region also has this tendency comparing with the head and tail objects. Thus, Eq4 could easily capture relation (union region) than single objects regardless of what threshold is applied. b). VG dataset is not balanced though it has a lot of categories. More importantly, the VG data also has centre bias due to the human bias, and the annotation (relation) is created by human where relation around RoIs(center) is labeled. Thus, both measures are not used for the position bias. --- Reply to Comment 1.1.1: Title: Response to feedback (part 1) Comment: We appreciate the reviewer for comments and exchange of ideas. Center bias, imbalance categories, and dataset selection are indeed important considerations for not only our research but the whole community. Please find below our elaborations on these issues: **1. Q**: How do diverse objects solve the center bias effect? 90% relations belong to the categories of possessive(has part of) and geometric(above behind). Does the relation (union region) definitely correlate more with the saliency region due to the center bias? **A**: We appreciate the discussions about center bias. We agree with the reviewer that center bias naturally exists in the datasets due to human bias (e.g., rules of composition in photography). While the majority of studies or methods live with such bias, we are aware of it and identified ways to counter unwanted bias, from both dataset selection and method design perspectives as elaborated below: **Dataset selection**: compared with conventional iconic images with one or few dominant objects in the center, some later datasets highlight multiple salient objects in a naturalistic context, which reduces center bias. For example, the OSIE dataset [17] was proposed to counter center bias by carefully selecting stimuli with a large number of salient off-center objects, demonstrating a much smaller center bias with popular datasets like the MIT saliency dataset [16]. To demonstrate the relatively small center bias on the Visual Genome dataset, we report the key indicators evaluating center bias in comparison with OSIE. | | Number of Objects Per Images | Object Distance to Center| |----------|----------|----------| | OSIE | $ 7.93 \pm 3.95$ | $0.3 \pm 0.13$ | | Visual Genome| $16.43 \pm 8.21$ | $0.3 \pm 0.14$ | **Method development**: We would like to clarify several efforts to counter center bias in our method: (1) Our analyses are based on objects and their attributes (including action-related attributes) obtained from Visual Genome. We do not consider object relations in our pool of semantics, and thus our analytic framework is not affected by the effects of object unions. (2) Our method leverages the IoU metric to measure the alignment between probabilistic distributions of a basis and the segmentation of different semantics, thus interpreting the semantic meaning of the basis. The metric is independent of the position of semantics, and counters the effect of size by normalizing with the union between probabilistic distributions and segmentation of semantics. As a result, it is insensitive to the larger semantics appearing in the center. Our further analyses confirm that the semantic importance has weak correlations with distance to the image center (**Pearson’s r=-0.05**) or object size (**Pearson’s r=-0.11**), which underscores that large objects or union regions near the center do not necessarily imply higher importance. **2. Q**: VG dataset is very skewed (imbalanced), though it has a lot of categories. **A**: We agree with the observation regarding the skewed distribution. Like center bias, imbalance categories exist in most naturalistic datasets. We would like to add several observations: (1) On the positive side, the skewed distribution reflects the frequencies of objects or semantics occurring in real life, offering opportunities to develop insights and models for the wild. (2) Recent studies (e.g., [42, 43]) have commonly leveraged such large skewed naturalistic datasets as probing datasets, and provided valuable insights into the behaviors of deep networks. (3) We follow [43] and incorporate adaptive thresholding to maintain the stability of relative ordering among semantics, regardless of their size or frequency. Furthermore, we incorporate an averaging process that divides the alignment scores $O$ in Eq.4 by the number of images, effectively countering the imbalanced distribution. Such a paradigm enables us to take into account the naturalistic contexts without overfitting to data biases. **3. Q**: Could you please add a little more on the normalization Z, is this new in this study? I might miss it in [43]. **A**: The reviewer is right that Z is new in this study. [43] interprets a deep layer for image classification and does not consider normalization. Differently, our study aims to unveil the nuanced relationship between semantics and visual saliency, and it for the first time quantifies the contributions of semantics with deep saliency models. Besides considering Z that normalizes the contribution of the semantics to the range of [-1, 1] (for semantics with positive/negative contributions, Z denotes the maximal/minimum contribution among all semantics), the numerator part of Eq. 5 is also new, which quantifies the contribution of semantics to saliency by incorporating the semantic meanings of bases and their corresponding weights learned in deep saliency models. We will incorporate the details in the revision.
Summary: This paper examines the problem of predicting visual saliency in images. Unlike many other works, it focuses on determining what leads to the predictions made, including underlying features that are learned, and formulating the prediction as a combination of bases. Through this, one is able to garner an understanding of how the models make their decisions and attach this to semantic information or other concepts. Strengths: Strengths of this paper are as follows: 1. It approaches saliency from a different point of view; rather than competing to marginally beat an ROC score, it takes a step towards truly understanding the success of deep learning models in this domain 2. By explicitly modelling features correlated with attention, and those inversely correlated with attention, one can model both what is salient and what is not in a way that allows these concepts to be quantified 3. Because the model constructs it's prediction from a set of bases, this effectively allows human attention to be modelled subject to a change to these bases, as demonstrated in an example involving autism. Weaknesses: Ultimately, it would be nice to see some quantitative results on how well the model performs compared to some of the state-of-the-art models, although I understand this is not the point of the paper. Nevertheless, a detailed set of results along this dimension could add value but I wouldn't insist on it. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. If possible, can you comment on how this formalism for saliency prediction extends to other domains? 2. In figure 5, it seems that a single epoch of fine tuning is closer to careful fine tuning than the original weights. Is this from using a relatively large learning rate? 3. Related to the above, how does this vary with changes to the learning rate? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have addressed limitations of their approach in Section 5 of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. Q**: It would be nice to see some quantitative results on how well the model performs. **R**: We thank the reviewer for the suggestion and include the results in the global comment, which shows that our model achieves state-of-the-art performance. **2. Q**: Can you comment on how this formalism for saliency prediction extends to other domains? **R**: The key components of our framework, such as feature factorization and probabilistic inference, are general and adaptable, which can be readily extended to other domains and applications. For instance, our method can be directly applied to similar regression tasks in other domains, e.g., visual aesthetic estimation, to gain insights into the contributions of different semantics to the predicted scores. It can also be extended to classification tasks such as image classification, and help identify the common/distinct semantics for different classes. For this, we can adjust our paradigm based on the general methodology discussed in [42], and interpret the relationship between object classes and fine-grained concepts by iteratively analyzing the contributions of semantics for predicting each class. **3. Q**: It seems that a single epoch of fine-tuning is closer to careful fine-tuning than the original weights. Is this from using a relatively large learning rate? How do the results vary with the learning rate? **R**: In general, the shift of semantic contributions in saliency prediction models during fine-tuning is larger in the first epoch compared to the subsequent epochs. This is because the pre-trained ImageNet features provide a good initialization of model weights for saliency models to converge quickly. The learning rate used in our experiments is 1e-4, following the learning recipe of the original paper [11]. We also experimented with a learning rate of 4e-4 and observed similar convergence behaviors and results.
Summary: This paper attempts to decompose the learned representation of a data-driven saliency model into a constituent set of bases that are mapped onto semantic concepts, thereby providing insight into what is driving the model's representation of saliency. This method is applied to three different saliency models of varying formulation over several datasets. Some discussion is then provided of the results, as well as some qualitative discussion of examples of model failures. Strengths: The paper takes on a challenging and open-ended problem in saliency modelling, namely the difficulty of teasing apart the different contributions to attentional capture. Similarly, the work provides an example of continued work in general explainability within deep learning methods, which is an important issue within the field. The paper is clearly written and relatively easy to follow, although there are a few details that are missing (such as the choice of N). Weaknesses: There are some references that seem pertinent that were not discussed. In particular, significant work on failure modes of modern saliency models was poorly represented. - Bruce et al., "A Deeper Look at Saliency: Feature Contrast, Semantics, and Beyond", CVPR 2016 -- This paper digs into some of the failure modes common to deep learning models, including object vs. background and semantic vs. feature contrast elements (e.g. see Figure 7), providing pertinent insights to the discussion in the submission. - Kümmerer et al., "Understanding Low- and High-Level Contributions to Fixation Prediction", ICCV 2017 -- This paper explicitly explores constituent elements of saliency representation in deep networks from the perspective of high-level vs. low-level features. Given the way the paper attempts to tease apart the representation of saliency between different feature classes, it is conceptually highly relevant background for the current submission. - Kotseruba et al., "Do Saliency Models Detect Odd-One-Out Targets? New Datasets and Evaluations", BMVC 2019 -- This paper provides a psychophysical (P^3) and natural image (O^3) dataset with targets explicitly defined by low-level salient features (e.g. colour, orientation, shape, or size singletons), and finds that saliency models (including deep learning-based models) largely perform quite poorly. For exploring failure rates the O^3 dataset would be a potentially useful (albeit ground truth was defined by semantic object annotation and not fixation data), but even if the dataset is not used the examination of model failures in the submission should include the context of this prior exploration. - Tatler et al., "Visual correlates of fixation selection: effects of scale and time", Vision Research, 2005 -- This paper explores the evolution of fixations through time, including aspects such as central vs. peripheral distribution and inter-subject consistency of fixation location. Given that this is one area that the submission claims novelty, it would be good to put it in context with prior explorations in this area of the temporal evolution of low-level human attention. Overall, while the paper is interesting and tackles an exceedingly challenging problem, I think there are a number of conceptual issues that it needs to overcome. While some specific issues are given in the questions below, the primary issue is that while the submission encodes the positive/negative importance of the various bases extracted, it is well established within psychophysics that the relative importance of elements to saliency is contextual (e.g. see Nothdurft (1993) "Saliency effects across dimensions in visual search"; Nothdurft (2000) "Salience from feature contrast: additivity across dimensions" for some low level examples), and so these relative attributes are likely to change from image to image. Within the current submission, these attributes change from model to model and dataset to dataset; what conclusions are to be drawn from this? Is the technique shedding light on dataset composition, model bias, or some deeper aspect of relative aspects of saliency? Much of the analysis is presented without clear connection to either human behaviour (with the exception of Section 4.5, which cleverly makes use of the technique to explore the representations learned from data from different human subject populations or conditions) or model performance in a traditional sense, making it difficult to put into context or derive deeper insight. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Although drawing on the architectures of SALICON, DINet, and TranSalNet, the model configurations in this paper are distinct from the instantiations of the original publications in order to accommodate the need for trainable bases. How does that change the ultimate behaviour of the models with respect to standard measures of saliency performance? Alternatively, even a quantified value for the change in saliency maps when compared within model (e.g. the paper's version of SALICON correlated against the standard instantiation of SALICON) would help put the paper's results in context with the existing literature. - What is the value of N (the number of bases)? How stable are the results with respect to N (i.e. does the mapped semantic content change substantially with even small changes in N)? - Each basis is mapped onto a top-5 semantic mix. Why not onto a single concept? How was 5 selected? Similar to the previous question, how does this choice affect the subsequent analysis? - How is "action" defined in a static image? The primary example given, "having meeting", seems like a social activity, which was a separate category. I get that it is challenging to relate the messy details of semantic categorization in a short paper, but given that this is central to the topic of the paper, I think it needs a clearer explanation. - How do the insights provided in this paper relate to model performance? When the models show a markedly different breakdown of salient factors (e.g. Figure 4, which shows SALICON emphasizing vehicles much more strongly than TranSalNet, while TranSalNet emphasizes clothing more than any other model), does this correlate with predictive accuracy? - Related to the previous question, could you use the IoU Measurement process used to assign labels to the bases to provide an approximate breakdown of the factors leading to human fixations directly? This might provide another point of comparison to better put the results of this paper in context. - Given the range of behaviours across the models shown in Figure 4, why does Figure 5 average the semantic weights across models (also, this should be noted in the caption; when I first read the paper I was quite confused which model was being shown in Figure 5)? What is the justification for this? Do the models tend to converge after fine-tuning? - Figure 6 (b.) and (c.): are these results for subjects without autism? If so, this should be more clearly noted. - What was meant by line 304: "Whether models... open question."? What would lead to a model behaving "even better than humans", given that humans are the system trying to be modelled? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The limitations discussed seem clear. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. Q**: The relative importance of elements to saliency is contextual. These attributes change from model to model and dataset to dataset; what conclusions are to be drawn from this? Is the technique shedding light on dataset composition, model bias, or some deeper aspect of relative aspects of saliency? **R**: We agree that the relative importance of elements to saliency is indeed contextual, as demonstrated in the referred psychophysics research on low-level features (Nothdurft, 1993 and 2000) as well as recent ones including higher-level semantics [17, 19, 41]. Instead of investigating individual images, this study aims to reveal general conclusions from learning-based methods based on a large-scale dataset featuring diverse objects in natural contexts. For example, a small far-away pedestrian face in a street image can be less salient than a nearby car, but with many images with faces of different sizes and locations co-occurring with various objects in natural contexts, faces show overall high saliency values. This allows us to make statistical inferences regarding the relative significance of semantics, quantifying and visualizing their contributions to saliency prediction. It further enables a collection of interesting analyses and important conclusions including (1) differentiating the positive and negative contributions of semantics is critical to saliency prediction; (2) deep saliency models learn key properties of attention in different settings (different participant groups, stimuli, and time durations), mostly aligned with findings in human vision studies; and (3) common failures of deep saliency models can be attributed to the inability to differentiate semantics. The reviewer is right that “the technique is shedding light on dataset composition, model bias…”. While we highlight the general conclusions, the proposed framework can also be applied to different data or models to reveal certain data- or model-specific conclusions or biases, so readers may be aware of the differences and select them accordingly. For example, Section 4.3 analyzes the same model trained on different datasets and reveals the data characteristics based on semantic weights. **2. Q**: Some references were not discussed. **R**: The references are indeed relevant. Our work complements them by differentiating high-level semantics based on their contributions and understanding the general mechanisms of deep saliency models. We will include these papers and add a dedicated section to discuss more comprehensively about failure modes, saliency representation, and temporal evolution in saliency models. **3. Q**: How does the method change the saliency performance? **R**: We include the results in the global comment, showing that our method does not compromise saliency performance. **4. Q**: What is the value of N? **R**: We set N based on the number of units in the final layers of deep saliency models [9-11]. We explored two settings of N (512 and 1000) and found that their differences in contributions were not substantial. Therefore, we proceed with N=1000, which strikes a balance between granularity and computational efficiency. **5. Q**: Why not project bases onto a single concept? **R**: This approach resonates with [42] that uses the top k concepts for interpreting image classification. The top-1 semantic alone may only explain ~30% of the alignment score (sum of normalized IoU). Alternatively, the top-5 semantics account for 90-100% of the score, capturing a broader range of contributing semantics while avoiding an overemphasis on dominant salient/non-salient semantics (e.g., face and cloudness). **6. Q**: How are semantic categories like action defined? **R**: Our categorization of semantics considers both their association with objects and meanings. For categories related to actions, we first identify human-centric social semantics (i.e., objects and attributes related to humans), and then distinguish actions (e.g. doing something) from non-actions (e.g., body parts). We also offer detailed results for each semantic in the supplementary materials. **7. Q**: How do the insights relate to model performance? **R**: The paper sheds light on how deep saliency models prioritize visual cues to predict saliency, which is crucial for interpreting model behavior and gaining insights into visual elements most influential in performance: For example, Figure 5 analyzes the evolution of semantic weights through fine-tuning where performance increases with epochs. The fine-tuning improves the weight difference between salient and non-salient semantics, leading to enhanced model performance. **8. Q**: Could you use the IoU Measurement directly for factorization? **R**: Directly using IoU for factorization and employing the cosine similarity $\alpha$ as a measure of contribution (which is a form of machine attention) may not quantify the actual contributions of bases (different attention weights lead to the same prediction, see “Attention is not Explanation, 2019”). We address the issue by reformulating saliency prediction with a probabilistic framework, to explicitly measure the contributions and reveal deeper insights into saliency models. **10. Q**: Are the results in Fig. 6bc for subjects without autism? **R**: The two subfigures analyze the effects of stimuli and time duration on the observers’ attention, which are unrelated to autism. We will provide clearer explanations to avoid misunderstanding. **11. Q**: What was meant by line 304? **R**: When inter-subject variability is high, human observers do not have a consensus on where to look. In this case, one assumption about saliency modeling (i.e., certain commonality about human attention patterns) may not be true, and the validity of using the ground truth human map for training and evaluation (i.e., the standard leave-one-subject-out approach), and the expected behavior of targeted models are interesting and open questions.
Summary: The paper presents a novel analytic framework that provides a principled interpretation and quantification of the implicit features learned by deep saliency models, which are used for predicting human visual attention. The framework decomposes these features into interpretable bases aligned with semantic attributes, and reformulates saliency prediction as a weighted combination of probability maps. The authors conducted extensive analyses to understand the factors contributing to the success of saliency models, including the positive and negative weights of semantics, the impact of training data and architectural designs, and the effects of fine-tuning. They also explored visual attention in various application scenarios, such as autism spectrum disorder, emotion-eliciting stimuli, and attention evolution over time. The study identifies the accurate feature detection and differentiation of semantics as key factors in the models' success, influenced by training data and design choices. The framework is also useful for characterizing human visual attention and understanding common failure patterns in saliency models. The authors suggest incorporating structures and lower-level information for improved modeling. The research has potential impacts on optimizing human-computer interfaces, assisting visually impaired individuals, and enhancing societal benefits. Strengths: 1. The paper presents a novel analytic framework that provides interpretation and quantification of the implicit features learned by deep saliency models for predicting human visual attention. The framework decomposes implicit features into interpretable bases aligned with semantic attributes, allowing for a weighted combination of probability maps connecting the bases and saliency in saliency prediction. The framework effectively identifies a variety of semantics learned by deep saliency models, including social cues, actions, clothing, and salient object categories, showcasing its versatility in analyzing attention across diverse scenarios. 2. The framework reveals the positive and negative contributions of semantics to saliency, highlighting the ability of deep saliency models to distinguish between salient and non-salient semantics. 3. The analysis demonstrates how training data and model designs impact saliency prediction, with shifts in semantic weights reflecting the characteristics of the datasets and models. 4. The study also investigates the effects of fine-tuning on semantic weights, showing how deep saliency models progressively adapt features during training to better capture salient cues and refine the weights of negative semantics. 5. The framework is also applied to explore the capture of human attention characteristics, such as the impact of visual preferences, characteristics of visual stimuli (e.g., emotions), and temporal dynamics, providing insights into the factors influencing attention deployment. 6. The findings validate the effectiveness of deep saliency models in automatically identifying salient semantics, differentiating foreground from background, and encoding fine-grained characteristics of attention. Weaknesses: The work has a lot of novelty with good empirical analyses. The paper is also nicely written. However, I find that the authors have missed comparing the performance of their model with other saliency models using standard saliency metrics as can be found in this leaderboard: http://saliency.mit.edu/results_mit300.html. The authors have also failed to cite some important works from like DeepGaze, DeepFix, BMS, etc. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I am curious to know if the authors could perform and share the results of comparison with existing methods using the standard metrics for measuring saliency. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors analyzed the failure patterns of deep saliency models within the intermediate inference process using their proposed factorization framework. They select common success and failure examples where three tested models consistently have high/low NSS scores. They then perform a qualitative analysis by visualizing the spatial probabilistic distribution of the bases for semantics with positive and negative weights. In the successful examples, they find that accurate saliency prediction is correlated with the differentiation of diverse semantics. The stimuli in these examples typically have salient and non-salient regions belonging to different semantics. Therefore, the models, with the ability to distinguish positive and negative semantics, can readily determine the saliency distribution. On the other hand, in the failure examples, the models struggle to determine saliency within objects or among objects with similar semantics. Investigation of the probabilistic distribution of bases reveals that models often have a uniform-like distribution of bases on object parts or among objects of the same category, making it difficult to construct accurate saliency maps. The analysis also highlights that existing models have difficulty with scenes without salient objects. These scenes are challenging for the models as they lack clear focal points. The ground truth human attention in these failure patterns exhibits high inter-subject variability, suggesting that human viewers may not agree on where to look, making the ground truth maps less reliable. The authors raise the question of whether models may perform as well as or even better than humans in these challenging situations. Based on their observations, the authors hypothesize that leveraging more structured representations to encode contextual relationships between semantics and integrating mid- and low-level cues may be beneficial in addressing these failure patterns in certain scenarios. This suggests that incorporating additional information and incorporating contextual understanding may help improve the performance of deep saliency models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. Q**: The work has a lot of novelty with good empirical analyses, but has missed comparing the performance of their model. **R**: We acknowledge the importance of performance comparison with existing methods using standard metrics for measuring saliency, and have added comparisons in the global response. Results show that our approach is able to achieve competitive performance among three popular datasets. We are committed to including these comparisons and results in the revised version of the paper. **2. Q**: Cite some important works from like DeepGaze, DeepFix, BMS, etc. **R**: We appreciate the suggestion to highlight relevant works. These references will certainly enrich our discussion on diverse saliency techniques and we will add them in the revision. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' efforts to respond to all concerns raised by the reviewers. I would like to keep my rating.
Rebuttal 1: Rebuttal: We thank the reviewers for their insightful feedback. We are encouraged that they recognize our work as solving a fundamental, important, challenging, and open-ended problem, which approaches explainability and takes a step towards truly understanding the success of deep learning models (ZMfD, srbe, UKcT). We appreciate that they find our work with a lot of novelty and good empirical analysis, on multiple aspects including both salient and non-salient semantics, data and model impact, and the exploration of human attention characteristics (u4m5, ZMfD, srbe, UKcT). All reviewers identify the importance of feature decomposition and explicit modeling of the connections between bases and semantics, indeed two key components of our framework for bridging the gap between visual semantics and saliency. We are also glad that they consider our paper nicely written (u4m5, ZMfD, UKcT). The key objective of our study is to develop a principled framework for interpreting and understanding the underlying mechanism behind deep saliency models, rather than designing saliency models that strive for enhanced performance. As pointed out by Reviewer srbe, instead of competing to marginally beat an ROC score, it approaches saliency from a different point of view and works toward a true understanding of deep saliency models. With this objective in mind, by explicitly quantifying the contributions of diverse semantics, our approach provides key insights into the impacts of different factors (e.g., datasets, model architecture, learning process in Section 4.2-4.4) on saliency prediction and attention deployments in various settings (e.g., different participant groups, stimuli with diverse sentiment, and varying temporal dynamics in Section 4.5) and also complements studies on model design [9, 10, 11] with an interpretable tool for investigating the behaviors of models (e.g., Section 4.6). Although not the focused point of the paper as acknowledged by the reviewers, our approach also shows promise in achieving competitive performance in saliency prediction. To complement our analyses in the main paper and further substantiate the efficacy of our methodology in accurately interpreting saliency models without altering their inherent behaviors, we follow reviewers' suggestions and include an additional performance comparison. The table below shows the comparative performance across three commonly used datasets: OSIE, MIT, and SALICON (our model is trained using the SALICON training split with a DINet backbone). These results demonstrate the competitive nature of our approach compared with state-of-the-art techniques. It is noteworthy that our approach introduces minimal architectural modifications, limited to the last two layers of the saliency models (see Section 4.1 for details), thereby ensuring that its performance aligns seamlessly with the original DINet model across all datasets. | | | CC | NSS | |----------|----------|----------|----------| | OSIE | SALICON| 0.63 | 2.75 | | | SAM | 0.65 | 2.70 | | | DINet| 0.63 | 2.88 | | | Ours (DINet) | 0.64 | 2.91 | | MIT | DVA| 0.64 | 2.38 | | | SALICON | 0.70 | 2.56 | | | SAM| 0.69 | 2.47 | | | DINet | 0.70 | 2.54 | | | Ours (DINet) | 0.70 | 2.53 | | SALICON | DeepNet| 0.86 | 1.61 | | | SAM | 0.86 | 1.84 | | | SALICON | 0.86 | 1.89 | | | UNISAL | 0.88 | 1.95 | | | EML-Net | 0.87 | 1.95 | | | DINet | 0.87 | 1.92 | | | Ours (DINet) | 0.86 | 1.89 | We address the questions (**Q**) raised by each reviewer in the individual responses (**R**) below, and will incorporate the comments in the revision.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
PromptCoT: Align Prompt Distribution via Adapted Chain of Thought
Reject
Summary: This paper introduces PromptCoT, an enhancer that automatically refines text prompts for diffusion-based generative models, improving their capability to produce high-quality visual content. The system is based on the idea that prompts that resemble high-quality image descriptions from the training set lead to better generation performance. Pre-trained Large Language Models (LLM) are fine-tuned using a dataset of such high-quality descriptions, allowing them to generate improved prompts. To mitigate the tendency of LLMs to generate irrelevant information, contamination, transfer, and even a Chain-of-Thought (CoT) mechanism are used to improve alignment between original and refined prompts. Additionally, to maintain computational efficiency, the system employs adapters for dataset-specific adaptations, leveraging a shared pre-trained LLM. When tested on popular latent diffusion models for image and video generation, PromptCoT showed significant performance improvements. Strengths: 1) This paper shows an insight from the observation that prompts resembling high-quality image descriptions from the training set lead to better generation performance. It makes sense from my perspective. Current text-guided image generation is still with limited generalization ability. The points near the training support samples are transferred better. 2) The idea to utilize LLM to adapt the original prompt to the one that is more aligned with training samples makes sense. It leverages the ability of LLMs to align the distributions. 3) Three training methods are proposed to implement the ideas including the continuation, revision, and CoT. 4) This paper builds the corresponding datasets to help the fine-tuning, which can benefit the following research. 5) The method using the GPT-3 to build datasets is smart. Weaknesses: 1) The finetuning of LLaMa is time-consuming. Could it be replaced by prompt learning or LORA? 2) Please add a discussion (e.g. in related work) with ''Visual Chain-of-Thought Diffusion Models'', though these two methods are clearly different. 3) If the datasets with neural captions can be built. How about adding these sentences to the training set of the diffusion model? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Please refer to the weaknesses part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: It can be found in the F part of the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for participating in the review of our work and providing meticulous reviews.2 Regarding your questions, we strive to provide as comprehensive answers as possible. ## Q1: Replace finetuning by prompt lerning or LoRA. Indeed, the fine-tuning process for full-parameter LLaMA can be time-intensive if done individually for each distinct task, rendering it inefficient and impractical. In pursuit of streamlined multi-task adaptation, we have embraced the employment of an adapter-based architecture known as LoRA. Experimental results demonstrate that LoRA-based adapters achieve comparable performance. | booster | aes-score | CLIP-score| |:-:|:-:|:-:| |t2t-blip-30b | 5.90 | 0.282| |GPT-4| 5.91| 0.280| |cot-30b| 5.93| 0.274| Also provided in pdf file tabel 1. ## Q2: Add a discussion with ``Visual Chain-of-Thought Diffusion Models''. We have thoroughly examined this study, in which the authors enhance the expressive capability of the text condition by generating a new CLIP embedding through a pre-trained diffusion model. This enhancement leads to an improvement in the quality of generated images. It's worth noting that this work is quite intriguing, showcasing methods in bridging text and image generation. We will certainly cite and discuss the work in our main paper. In contrast, our approach begins with the textual prompt and explicitly focuses on optimizing the alignment between the prompt and the training dataset. By employing the CoT (Chain-of-Thought) method, we achieve a more effective fusion of text continuation and revision abilities. Our approach not only enhances the visual quality of the images but also ensures consistency between the textual and visual concept. ## Q3: Add neural captions to the training set of the diffusion model. Our core objective in the study was to enhance the capabilities of the generation model while maintaining its inherent structure. This was accomplished through prompt alignment, rather than resorting to modifications in the training dataset or pursuing further fine-tuning. The rationale behind utilizing automatic captioners to construct the dataset was to establish pairs of expressions, with one resembling human-like expressions and the other mirroring training-like expressions. In this context, the outputs generated by the automatic captioners serve as representations of human-like expressions. It enables us to utilize LLM (Large Language Model) as an aligner, facilitating the conversion of human-like expressions into training-like expressions. --- Rebuttal Comment 1.1: Comment: I appreciate the authors taking the time to answer the questions during the rebuttal. I also read the comments of other reviewers. I still think this is good work with a clear contribution. Thus, I keep my positive score.
Summary: In this manuscript, the authors proposed a simple yet effective framework to improve the generation quality of pretrained generative models. Generally, to align the prompt distribution with large language models, the authors present three individual solutions to align and enhance the original textual inputs, i.e., providing a compelling text continuation with given initial inputs, revising the initial inputs, and using chain-of-thought to enhance the initial inputs by 5 steps. Moreover, the authors also introduce multi-task adaptation to improve the quality of generated textual desctiptions for each dataset. The experimental results demonstrate that the generative models with PromptCoT can achieve better qualitative performance than the baseline method. Strengths: 1. The proposed PromptCoT is easy to follow. 2. The qualitative results is promising. Weaknesses: 1. Introducing prompting technique to improve the quality of the textual inputs for generative models lacks novelty, since intuitively, providing more detailed information indeed improves the generation quality of the generative models. The authors could clarify the novelty, e.g., how the prompting technique or the CoT technique different from other works, or why the generated text is better than human-refined counterparts. 2. As shown in Figure 4, the proposed text revision technique may generate some uncontrollable additional information (e.g., time, weather and place). I think the prompted results may be influenced by the inherent bias of the language model or the prompting dataset. The authors could provide some analysis to show that the enhanced textual inputs are unbiased with given CoT technique, or show that the enhanced textual inputs are controllable. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See the weakness for detail. If the authors could address my conerns 1 and 2, I will raise my rating. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors have stated the limitation of the proposed PromptCoT, i.e., relying on the capability of the generative models and the quality of inital textual inputs. These limitations can be seen as the future direction of this work. Flag For Ethics Review: ['Ethics review needed: Inadequate Data and Algorithm Evaluation'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for participating in the review of our work and providing meticulous reviews. Regarding your questions, we strive to provide as comprehensive answers as possible. ## Q1: Intuitively, providing details improves the generation quality. Clarify the novelty. Adding details to the text directly for performance enhancement is uncontrollable. Our "tcontinue" experiment serves as the most direct evidence. By adding details without altering the original content, the improvements it brings, as observed from various evaluation metrics in Tab.2 in the main paper, remain quite limited. We extensively surveyed numerous relevant works and found that our approach possesses a significant level of uniqueness. To illustrate the novelty of adopting the CoT method, we provide the following comparisons. * **[1]** employs Large Language Models (LLMs) to align with manually optimized prompts, and subsequently integrating reinforcement learning techniques to enhance performance. * **[2]** utilizes Large Language Models (LLMs) to generate an additional set of conditional embeddings, which collaboratively interact with text embeddings to enhance performance. * **[3]** explores the usage of models from the GPT series to refine prompts. However, our experiments have demonstrated that the CoT approach significantly outperforms GPT-3.5 and is on par with GPT-4 in terms of effectiveness. * **[4]** empirically summarizes various methods for optimizing prompts. None of published studies formally discuss the problem. Our PromptCoT features a tailored 5-step CoT design that takes into account the trade-off between introducing new details and maintaining visual-textual consistency, which not only stands out as distinctive but has also proven successful. In contrast to human refinement, our approach provides enhanced stability. This is attributed to the reliance of human refinement on intuition and experience, which introduces a greater degree of randomness in the introduced words. For instance, in the pursuit of clearer images, the inclusion of terms such as "4k," "8k," and "high-resolution" seems logical, but they might not constitute the most optimal selections. Conversely, our methodology concentrates on identifying the optimal refinements, with the goal of maximizing the diffusion model's generation capabilities by achieving optimal alignment within the combined prompt. In practice, we conducted experiments shown in Tab.5 in the supplementary material, which revealed that manual refinement did not demonstrate strong performance. On the contrary, our method proved to be more effective. ## Q2: Analyze the enhanced prompts are unbiased or controllable. According to the design of the CoT pipeline, during the refinement process, we extract key information from the original prompt. In the expansion phase, we continue to expand based on the key information, thereby filtering out irrelevant details and ensuring semantic consistency. Besides, in pdf file, we provide some examples which show that CoT aligner can better control refined prompts by alleviating missing crucial information and extending irrelevant information. We evaluate that the refined prompts do not deviate through cross-CLIPScore between origin prompt and image generated by refined prompt, between refined prompt and image generated by origin prompt. |booster| CLIP-score origin prompt| CLIP-score refined prompt| |:-:|:-:|:-:| |N/A| 0.26| 0.26| |t2t-blip-30b| 0.23| 0.21| |cot-30b| 0.24|0.25| Also provided in pdf file table 2. ## Reference **[1]** Hao, Yaru, et al. "Optimizing prompts for text-to-image generation." arXiv preprint arXiv:2212.09611 (2022). **[2]** Zhong, Shanshan, et al. "Sur-adapter: Enhancing text-to-image pre-trained diffusion models with large language models." arXiv preprint arXiv:2305.05189 (2023). **[3]** Zhu, Wanrong, et al. "Collaborative Generative AI: Integrating GPT-k for Efficient Editing in Text-to-Image Generation." arXiv preprint arXiv:2305.11317 (2023). **[4]** Oppenlaender, Jonas. "A taxonomy of prompt modifiers for text-to-image generation." arXiv preprint arXiv:2204.13988 2 (2022). --- Rebuttal Comment 1.1: Comment: Thanks for your response. Most of my concern has been solved. I will raise my rating.
Summary: This paper aims to improve the images generated by the off-the-shelf diffusion model, such as Stable Diffusion. This is done by fine-tuning a large language model (LLaMA) using text continuation on more high-quality prompts, which are collected by hand-crafted rules on, for example high CLIP similarity and text length. Furthermore, off-the-shelf image caption models (like BLIP) are also used to curate high-quality prompts. While the proposed method seems effective, the lack of technical novelty is a concern of this work. Strengths: 1. The overall presentation is good, and the paper is simple to follow. 2. The visualization is very interesting. For example, Figure 1 is very impressive. 3. The overall pipeline is well encapsulated in Figure2-4. It is simple for the reader to understand the high-level idea of the work by looking at the figures. Weaknesses: Major 1. Novelty is a big concern in this work and there is no technical contribution in this work. The author basically uses many off-the-shelf techniques to augment/modify the input text. Text-continuation is also commonly used for pretraining large LLM. The adaptation techniques in section 3.4 are also not novel and the author simply leverages the existing techniques. In general, it is unclear what is the technical contribution of this work. 2. L258 Is there any reason why t2t-blip booster demonstrates the best performance? Any theoretical reason behind this? Minor 1. L224 gpt to “GPT” Technical Quality: 3 good Clarity: 3 good Questions for Authors: Overall, there is no big drawback in this work. However, there is not much technical contribution in this work as well. This work might be useful for industrial applications, but its publication might not benefit the community. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitation is discussed in appendix Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for participating in the review of our work and providing meticulous reviews. Regarding your questions, we strive to provide as comprehensive answers as possible. ## Q1: Technical contribution is unclear. Expanding the text input does not guarantee a definite improvement in performance. Our investigation has unveiled a notable insight: even when individuals possessing academic expertise engage in the process of refinement, it can paradoxically result in a reduction in overall performance (as depicted in Table 5 in the supplementary materials). Conversely, certain automated methodologies, as elaborated in our paper, provide evidence that the potential for performance enhancement through prompt refinement remains largely untapped. Our motivation lies in the pursuit of identifying the most optimal refinement approach. Through an in-depth analysis presented in our paper, we have discovered a pronounced correlation between the refined prompts and the distribution of textual content within the training dataset. Then we fine-tune the LLM on the textual data extracted from the training set of the pre-trained diffusion model. Leveraging the model's inherent tendency to overfit, we ingeniously harnessed its capacity to automatically discern and select the most fitting synonymous word candidates from a myriad of options. These choices are tailored to harmonize seamlessly with the underlying pre-trained SD model. This motivation diverges fundamentally from the conventional application of LLM in direct prompt refinement, embodying a unique and distinctive approach. Furthermore, we extensively surveyed numerous relevant works and found that our approach possesses a significant level of uniqueness. To illustrate the novelty of adopting the CoT method, we provide the following comparisons. * **[1]** employs Large Language Models (LLMs) to align with manually optimized prompts, and subsequently integrating reinforcement learning techniques to enhance performance. * **[2]** utilizes Large Language Models (LLMs) to generate an additional set of conditional embeddings, which collaboratively interact with text embeddings to enhance performance. * **[3]** explores the usage of models from the GPT series to refine prompts. However, our experiments have demonstrated that the CoT approach significantly outperforms GPT-3.5 and is on par with GPT-4 in terms of effectiveness. * **[4]** empirically summarizes various methods for optimizing prompts. None of published studies formally discuss the problem. Our PromptCoT features a tailored 5-step CoT design that takes into account the trade-off between introducing new details and maintaining visual-textual consistency, which not only stands out as distinctive but has also proven successful. ## Q2: Explain why t2t-blip booster demonstrates the best performance. Both t2t-blip and t2t-inter utilize the same revision approach; however, t2t-blip exhibits superior performance compared to t2t-inter. Our analysis revealed that the captions generated by blip demonstrated greater consistency with the text descriptions in our validation set (COCO). This enhanced the ability of the LLM to effectively learn the alignment between COCO and LAION, thereby resulting in superior performance. To harness the benefits of both continuation and revision advantages within the CoT method, we employed a more robust pre-trained model, llama-30B, which yielded the most promising outcomes in our latest experimental evaluations. | booster | aes-score | CLIP-score| |:-:|:-:|:-:| |t2t-blip-30b | 5.90 | 0.282| |GPT-4| 5.91| 0.280| |cot-30b| 5.93| 0.274| Also provided in pdf file tabel 1. ## Q3: L224 gpt to “GPT” Thank you for the correction. We will carefully review the revision. ## Reference **[1]** Hao, Yaru, et al. "Optimizing prompts for text-to-image generation." arXiv preprint arXiv:2212.09611 (2022). **[2]** Zhong, Shanshan, et al. "Sur-adapter: Enhancing text-to-image pre-trained diffusion models with large language models." arXiv preprint arXiv:2305.05189 (2023). **[3]** Zhu, Wanrong, et al. "Collaborative Generative AI: Integrating GPT-k for Efficient Editing in Text-to-Image Generation." arXiv preprint arXiv:2305.11317 (2023). **[4]** Oppenlaender, Jonas. "A taxonomy of prompt modifiers for text-to-image generation." arXiv preprint arXiv:2204.13988 2 (2022). --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: The rebuttal has addressed my concerns and the score is raised.
Summary: The main motivation of this paper is to better align prompts to the textual information of high-quality images within the training set. The authors propose datasets, instruction templates and use CoT to finetune LLM to achieve this goal. Adapters are also use adapters to facilitate dataset-specific adaptation. Strengths: Application of LLMs to image generation for better text alignment is reasonable. The authors prepared datasets and conducted curated pipeline to achieve this goal. The presentation is clear. Weaknesses: 1. the observation that prompts aligned with high-quality images withing the training set are more inclined to yield visually superior outputs needs more demonstration. The authors verified this on LAION dataset. But it does not mean this is always the truth on all datasets. Whether it works on Flickr, Pinterest images? 2. The application of LLMs to refine texts is widely used now. Prompt engineering, Cot, adapters are also widely explored. Only the Application of these techniques seems not novel enough. 3. The pipeline may needs more explanation on why we need text continuation first then text revision. 4. the experiments are quite confusing. what are t2t-inter, cot_d, cot? the scores in Table 2 shows the effectiveness of each booster. How do they work together? Which booster works the best according to these experiments. what are Table 3 and Table 4 used for? It's really hard to summarize a conclusion that can match the design of the whole method. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: questions are listed in weakness. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: limitations are presented in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for participating in the review of our work and providing meticulous reviews. Regarding your questions, we strive to provide as comprehensive answers as possible. ## Q1: Evaluate effectiveness of CoT aligner datasets besides LAION. The most prominent open-source generative models, such as SD, VQ-diffusion, Imagen, etc., have achieved remarkable performance through training on the LAION dataset. Our objective is to refine the textual prompt to align to the training text, without altering the generative model itself. Consequently, we utilize the most popular training data of the leading generative models. Furthermore, we have extended our validation to text-to-video models in the main paper, which yielded results affirming the effectiveness of our aligner. This substantiates not only the efficacy of our aligner approach but also its capability to generalize across different modalities. ## Q2: The application of LLMs to refine texts is widely used now. Prompt engineering, Cot, adapters are also widely explored. Only the Application of these techniques seems not novel enough. Conventional wisdom held that expanding input text was believed to enhance performance in text generation. However, our investigation challenges this notion, revealing that involving academic experts in prompt refinement may paradoxically diminish overall performance (illustrated in Table 5 in the supplementary materials). Our approach demonstrates that the latent potential for performance improvement through prompt refinement remains untapped. We uncover a profound correlation between finely honed prompts and the distribution of textual nuances within the training dataset. Our methodology strategically fine-tunes LLM, empowering them to select fitting synonyms and seamlessly integrate them into the output. This approach diverges from conventional prompt enhancement methods. Our primary motivation is to identify the most optimal approach for prompt refinement. In our paper, we present a comprehensive analysis that reveals a strong correlation between refined prompts and the distribution of textual content within the training dataset. To achieve this, we employ a fine-tuning process on the LLM using the textual data extracted from the training set of the pre-trained diffusion model. Leveraging the LLM's inherent tendency to overfit, we creatively utilize its capacity to automatically discern and select the most suitable synonymous word candidates from a wide range of options. These carefully chosen words are seamlessly integrated into the underlying pre-trained SD model, resulting in a harmonious combination. It is important to note that our motivation significantly deviates from the conventional approach of using LLM for direct prompt refinement, making our methodology unique and distinctive. ## Q3: The pipeline may needs more explanation on why we need text continuation first then text revision. In our experiments, we separately employed two methods, revision and continuation. From the results of text refinement, we found that continuation can bring both useful and irrelevant information, while revision can greatly improve alignment. However, revision also runs the risk of altering the main content and omitting important information. Therefore, we constructed the CoT to complement the strengths and weaknesses of each method. In addition to the sequential order of performing continuation before revision in our CoT approach, we have also carefully designed methods for extracting the main content and adding relevant details. This ensures that the modified text not only contains rich and nuanced information but also avoids ambiguity or conflicting interpretations compared to the original text. ## Q4: Results in table is not explained clearly enough. We have provided descriptions of the aligners in both the main paper (line 240-248) and the supplementary materials (line 67-73). We will incorporate additional descriptions in the revised version. Here we provide a brief overview of the roles of each aligner: * t-continue: represents the continuation method. * t2t-blip: represents the rewriting method with BLIP incorporation for captioning during the training process. * t2t-inter: represents the rewriting method with the inclusion of clip-interrogator for captioning during training. * CoT_d: refers to the approach that exclusively employs the CoT dataset for fine-tuning. * CoT: Signifies the aligner that employs all five types of datasets as outlined in Figure 5 in the main paper for fine-tuning. These aligners function separately. During the initial phases of our investigation, we conducted separate explorations of the continuation and revision methodologies. Substantive enhancements in performance were discerned with each respective approach, accompanied by discernible variances in outcomes. Consequently, the prospect of harnessing the strengths of both approaches through a CoT was entertained. Subsequent empirical investigations involving both CoT_d and CoT substantiated that the direct revision methodology yielded markedly subpar outcomes in contrast to the incremental CoT procedure. In order to harness the full potential of the CoT methodology, we took a progressive step by incorporating a substantially larger-scale Language Model (LLM) with an expansive parameter count of 30 billion. This augmentation was accompanied by the implementation of the QLoRA technique, which facilitated efficient fine-tuning operations. The outcome of this strategic augmentation culminated in the development of our CoT_30B aligner, which notably exhibited unparalleled levels of performance within the context of our study. --- Rebuttal Comment 1.1: Comment: the rebuttal addressed some of my concerns. I agree that the the prompt refinement method in this paper is different from previous works. I raised my rating. But I still think it lacks technical contribution. The combination of off-the-shell techniques is more like a good, successful practice to utilize LLM for better image generation on specific models in industry, rather than a novel technique contribution. --- Rebuttal 2: Comment: Hi Reviewer 2oA6, Would you please respond to the Authors' rebuttal? Thanks! Best,
Rebuttal 1: Rebuttal: # To ALL We sincerely thank all reviewers for your comments. After reading all theomments carefully, we add more corresponding results and image examples to te rebuttal PDF file. We hope this rebuttal can address your concerns. If you have other concerns, we will reply as soon as possible. Pdf: /pdf/b89fad1620832da685a5733c26ec715a7c01316c.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Double Auctions with Two-sided Bandit Feedback
Accept (poster)
Summary: This paper studies double auctions, where a set of $N$ buyers interacts with a set of $M$ sellers to trade some goods. This is a fundamental problem in economics that has been studied extensively. The specific mechanism that is studied in this paper is the average mechanism; it sorts the bids of the sellers and the buyers, identifies the best $k$ so that exactly $k$ trades can happen, and then posts as price the average of the bids of the kth seller and the kth buyer. Formally, the authors study the following repeated setting: at each time $t$, each agent submits a bid, the average mechanism is implemented, and all the agents participating in a trade are revealed a noisy sample of their valuation. This paper aims to design a learning protocol that exhibits a sublinear regret with respect to the outcome of the average mechanism when all the agents declare their actual valuations. Note that in the model studied, the agents do not know their valuations but only learn it by participating in the auction and receiving bandit feedback. The learning protocol proposed by the authors is simple: each agent maintains a (scaled) confidence interval of their valuation, the sellers bid according to LCB, and the buyers to UCB. This protocol yields $O(\log T /\Delta)$ regret regarding social welfare, which is tight. From the agent's perspective, the protocol exhibits an $O(\sqrt T)$ regret for agents belonging to the optimal solution and an $O(\log T/\Delta) $ regret for the others. These results are shown to be tight up to poly-log T terms. Strengths: - The study of economic problems from an online learning perspective is an active and fruitful line of research, and many works have appeared at NeuriPS and ICML - The regret results are tight, and the algorithm is natural and elegant. - The fact that the authors studied social welfare (i.e., total happiness) and individual utility (i.e., individual happiness) offers a compelling argument for implementing this mechanism in real life. - Given the space constraints, the authors did a good job in presenting and motivating the problem and highlighting the crucial steps of the analysis Weaknesses: Minor comments: - I do not find that surprising the fact that sellers want to bid according to their LCB - Please spend some extra words on the definition of social welfare used in the paper. In the economic literature, social welfare is defined as the sum of the agents' utilities at the end of the trade. Thus, it also contains the sellers' valuations that retain their goods. The authors call social welfare the gain from trade, i.e., the variation in social welfare after and before the trade. As pointed out by [10], these two notions are equivalent when minimizing regret (due to its additive nature). Please clarify this point. - It makes little sense to have an experimental paragraph in the main body and all the experimental results in the appendix. Either add some plots in the main body or defer everything to the appendix. - The feedback in [10] is not bandit-like: they receive only ordinal information about the price posted and the agents' valuations. This is not enough to reconstruct the gain from trade (so it is strongly less informative than bandit feedback). Technical Quality: 3 good Clarity: 3 good Questions for Authors: None Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See above Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Social welfare definition:** We thank the reviewer for this pointer and will add the distinction from the classical definition of social welfare and what we consider. Further while defining social welfare, we will write a sentence crediting [10] for establishing the equivalence of the classical definition and what we consider while studying for regret minimization. **Weaker feedback in [10]:** We thank the reviewer for identifying this gap in our related work attribution. We will update the revision in Line 366 by adding a statement that – “*The work of [10] considers the single buyer and seller model under a much weaker ordinal feedback model, while in the present work we consider the multi-agent model under a stronger bandit feedback model. The feedback model in [10] is more restrictive since the gain from the trade cannot be estimated by the agents based on ordinal feedback while in our model, the gain from the trade can be estimated by each agent. However, our work considers the impact of multi-agent competition on regret minimization, which is not studied in [10].*” **Experimental plots in the main body:** We will follow the reviewer’s suggestion and use the extra page for the camera ready to move regret plots from the Appendix to the main body of the paper.
Summary: This paper studies double auction markets where both buyers and sellers receive bandit feedback. There are $N$ buyers and $M$ sellers trading a single type of item in the market during a time horizon $T$. They don't know their own valuations so they need to learn through repeated interactions. The auctioneer implements the average price mechanism at each round. All buyers use UCB algorithms while all seller use LCB algorithms. The benchmark is defined to be the ideal market where true valuations are known and all players bid truthfully. The authors derive regret upper bounds for social welfare and individual utility. They also give regret lower bounds in the minimax sense. Strengths: 1. To the best of my knowledge, this is the first paper that considers repeated double auctions with unknown valuations. Unlike in the standard repeated-auction setting, one need to deal with two-sided uncertainty in repeated double auctions. The regret analysis requires non-trivial techniques. 2. The analysis in the paper is limited to a specific variant of double auction mechanism, but the authors have discussed incentives and deviations, and have complemented the theoretical results with simulation study for completeness. I believe these results can inspire future work on a more general setting. 3. The paper is well written and organized. I particularly like the comments after the main results, which contribute to helping readers better perceive the regret bounds in theorems. The proof flow in the appendix is also very good. Weaknesses: 1. The proof techniques may not be easily applied to more general settings, i.e., when agents don't follow UCB-type algorithms, when the auctioneer don't use the average price mechanism, or where true valuations are time-variant, sampled from some distribution. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Line 148, $n_{s,j}(t)$ -> $n_{b,i}(t)$. 2. How to understand these regret bounds if there are ties. For instance, if $B_{K^*} = B_{K^*+1}$, will the regret on social welfare be arbitrarily bad? What if $\Delta=0$? 3. I personally suggest making the proof outline more concise, so that the main body of the paper can have space to put some results of simulation experiments. 4. Line 615. Does it mean $\beta\geq 2$ is enough for proving Regret=$O(\log T)$? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors acknowledge that the study of incentive compatibility is out of scope for this paper. However, they have given enough discussion on incentives in Appendix C. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable comments, and highlighting insightful future directions. Please find our response below. **General Applicability of Proof Techniques:** In bandit literature, handling stochastic reward and time-variant reward requires different algorithmic and technical ideas. We acknowledge this, and leave handling time-variant ture valuation open. We will add this to our future work discussions. Our proof techniques of Social regret relies only on convergence to the correct set of buyers and sellers, and can be adapted to other mechanisms with a reasonable effort. However, our techniques require substantive modifications to handle individual regret in other mechanisms as price discovery changes drastically with mechanism. We leave studying general double auction mechanisms with bandit feedback as a future work (see line 389-390 in the paper). **Adding Plots in the Main Body:** Thank you for this suggestion. If accepted, we will utilize the additional page in the camera ready to add simulation plots in the main paper. **$\Delta = 0$ is not feasible:** $\Delta = 0$ (where $\Delta$ is defined in line 182) implies $p^* = S_{K^*} = B_{K^*}$. However, Double auction only matches a buyer with bid higher than a seller. So if $K^*$-th buyer and $K^*$-th seller participate we must have $S_{K^*} < B_{K^*}$. Therefore, we always have $\Delta > 0$. **Ties in Valuation:** $B_{K^*} = B_{K^*+1}$ is an interesting setting. Understanding this behavior formally is out of scope. We provide an informal discussion here. In the case where $B_{K^*} = B_{K^*+1}$ and $S_{K^* + 1} > B_{K^*}$, i.e. there are $(K+1)$ buyers and $K$ sellers. A tie-breaking needs to be employed. We assume the tie-breaking happens randomly with equal probability. Under bandit feedback the UCB of $(K^*+1)$-th and $K^*$-th buyer will both be higher than LCB bid from the $K^*$-th seller with high probability. As exactly one of the two buyers match every round the social regret will remain mostly unchanged orderwise in N, M, K and T. The individual regret analysis is more complicated. We conjecture the number of times $K^*$-th and $(K^* + 1)$-th buyer match satisfy, $1/polylog(T) \leq n_{K^*+1}(T) / n_{K^*}(T) \leq polylog(T)$. Otherwise, the less matched buyer will have a higher UCB whp. Thus due to fluctuations in the UCB of these two buyers they will incur $polylog(T)$ regret. However, this is orderwise negligible to the $\sqrt{T}$ individual regret from the price estimation error for these two buyers. A formal treatment requires considering *all possible ties*, specifying *tie-breaking rules*, and utilizing arguments similar to the above to give regret guarantees. Therefore, we will leave studying *'Ties in valuation'* open, and add it to our *Conclusion and Future Work* section. **$\beta > 2$ suffices:** Thank you for pointing this out. Yes $\beta > 2$ suffices for our regret upper bounds, with increased constant term. E.g. Theorem 1 will hold with $MNb_{max} \zeta(\beta / 2)$ instead of $MNb_{max} \pi^2/6$, where $\zeta(x)$ is the Riemann zeta function which is finite for $x > 1$. We will update this in the final version. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I don't have any further questions.
Summary: The paper studies an online learning problem related to double auctions. In particular, the paper assumes that the auctioneer uses a average price mechanism. The goal is to study the online learning problem faced by sellers and buyers that do not know they own valuations. The authors design a algorithm based on upper confidence bounds that if applied by all the sellers and buyers guarantees low regret both for each individual and socially. In particular, they show that the social regret is upperbounded by $O(log T/\Delta)$, where $\Delta$ is the minimum price gap, while the individual regret is at most $O(\sqrt T)$. Strengths: The setting is interesting and of practical relevance. The proposed algorithm is easy to implement and guarantees good regret bounds. Weaknesses: The algorithmic approach is straghtforward and does not introduce substantial new ideas. I am confused by your choice to use instance dependent regret bound in some cases (depending on $\Delta$) and some instance independent regret bounds $\sqrt T$. I don’t see any conceptual difference between the case in which you apply instance dependent and instance independent bound. For instance, your Lemma 21 (that proves the instance dependent bound) employs arbitrary small difference between the buyer and seller valuations. Hence, it does not rule out logarithm instance dependent bound depending on $\Delta$. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please clarify why your are using both instance dependent and instance independent regret bounds. Why do you handle individual regret and social regret in different ways? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. Regarding instance dependent versus instance independent bounds** ***All our upper bounds in Theorems 1 and 2 are instance dependent***, including the $\sqrt{T}$ for participating agents. Theorem 1 bounds the social-welfare regret as a function of $\Delta$ and is thus instance dependent. In theorem 2, the regret upper bounds for both the participating buyers(sellers) and the non-participating buyers(sellers) depend on instance dependent terms such as $K^*$ and $\Delta$. Observe the $\Delta$ dependence in the second order $\log(T)$ term for the participating buyers and sellers’ regret. Thus, our regret upper bounds are instance dependent bounds. *We propose to add a statement and highlight that all our upper bounds are instance dependent in the revised version.* ***$\sqrt{T}$ Instance dependent Individual Regret:*** Among the instance dependent upper bounds, the regret for the participating buyers and sellers is the non-standard term of $\sqrt{T}$. This is unlike $\log(T)$ instance dependent bounds in typical multi-armed-bandits. At a high-level, a $\sqrt{T}$ term for the regret shows up for participating agents because of having to do *price-discovery* which is a continuous variable. To solidify this intuition, consider estimating the mean of a continuous random variable using $t$ i.i.d. Samples. It is known (for ex. Using Central Limit Theorem arguments) that the estimated mean will be about $\mathcal{O}\left(\frac{1}{\sqrt{t}}\right)$ away from the true mean. In the price-discovery setting, this error in estimating the true valuation will accumulate over time to a $\sum_{t=1}^T\mathcal{O} \left(\frac{1}{\sqrt{t}}\right) = \mathcal{O} \left(\sqrt{T}\right)$ regret term. Moreover, the system needs to eliminate suboptimal *matching* by playing them $O(log(T))$ times. This reflects in the $O(log(T))$ components of the individual regrets. ***$O(\log(T))$ Instance dependent Social Regret:*** In the social welfare regret, the *price discovery* does not play any role, because in each round the prices cancel out in the social welfare. Only sub-optimal *matching* used in each round contributes to the regret leading to decomposition in line 258. $O(log(T))$ number of play of each of the finite sub-optimal matchings suffices to separate the optimal matching, creating $O(log(T))$ social welfare regret. ***Our lower bound on social welfare regret is instance dependent:*** The lower bound given in Appendix B.7 on the social welfare is instance dependent and order-wise captures the upper bound. We are able to give instance dependent lower bounds by establishing a coupling between our system and a combinatorial semi-bandit system in Proposition 22. ***Our Lower bound for individual regret (Lemma 21) is instance independent:*** As the reviewer correctly notices, our lower bound for price-discovery in Lemma 21 is instance independent. Note we already mention this as a *minimax bound*. We do not have an instance dependent lower bound for price discovery in our work. Establishing an instance dependent lower bound is a challenging open problem as it requires us to formalize the mean-estimation intuition to obtain a $\sqrt{T}$ type instance dependent lower bound. Doing so requires circumventing several technical challenges such as estimation from adaptively collected samples through a bandit policy. The correlations among the samples are further complicated by the impact of market competition where a sample for the valuation is received only when an agent is matched which in turn is dependent on the other agents’ perceived valuation. These technical challenges render an instance dependent lower bound for the pricing beyond the scope of the present work. *We propose to make this explanation clearer in the revision and will highlight that the only instance independent regret bound is the individual regret. We will also pose as an open problem/future work of obtaining a $\sqrt{T}$ type instance dependent lower bound on individual regret.* **2. Regarding algorithmic simplicity and technical contributions** ***We view the simplicity of our algorithm as a strength:*** We note that our algorithm (sellers bid LCB, and buyers bid UCB) attains optimal social regret, and sub-linear individual regrets. Our key contribution is to show under the presence of two-sided uncertainty of double auction markets, simple UCB-LCB based algorithm can succeed. The fact that a simple algorithm can attain this performance is a strength and contribution of the paper. Thus, we respectfully push-back on the opinion that simplicity is a weakness. ***Technical Challenges and contributions in the analysis:*** We want to remark that despite the algorithm being simple, its analysis is novel and does not follow existing works in multi-armed bandits. The key challenge in the analysis is that the information across agents are different resulting in heterogeneity, as we explain in Section 4.1. Thus, even though the algorithm appears simple, *new ideas are needed for the multi-agent analysis.* --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response that clarifies most of my doubts. I've update my score accordingly. I encourage the author to better highlight that the regret for the participants depends also on $\Delta$. For instance, I find Table 1 misleading. The dependence from $\Delta$ should appear also in the $\sqrt T$ regret bound. Lemma 21 is not very meaningful in the context of you work. If I understand correctly, your upper bound on the class of instances used in the proof is arbitrary large! Again, Table 1 suggests that you have an almost matching lower bound for participants but this is not the case. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for a through and constructive feedback that is helping our draft! We will add the discussion from this rebuttal and in particular explicitly include $\Delta$ into Table 1. We will also highlight the limitation of our results in Table 1 that for individual regret, our upper bound is instance dependent while the lower bound (Lemma 21) is instance independent. In the conclusion section, we will list deriving instance dependent lower bounds for individual regret as an open problem.
Summary: This work considers learning in a two-sided double auction setting in which both sides must confront uncertainty over their valuations (which are realized upon winning in the auction) and choose to adhere to the same protocol in order to perform that learning. The work shows that buyers and sellers bidding less aggressively - e.g., buyers bidding higher than they otherwise might, and sellers bidding lower than they otherwise might - leads to faster learning than the usual alternative (Optimism in the Face of Uncertainty). One note is that strategic behavior and any Nash equilibrium will be a change in the other direction, and involve bidders bidding more aggressively (buyers lower / sellers higher). Using this strategy, the paper develops bounds on social welfare and individual participant regret. It is mentioned in the appendix that individuals may have incentives to deviate, but in the context of robustness of results, strategic behavior is not considered. Strengths: The work makes an interesting connection between double auctions and bandit learning, and would seem to be the first to tackle learning on both sides of the market. Weaknesses: The recommended strategy is quite simple - bidding higher for buyers or lower for sellers, which results in more trade happening than e.g. bidding at expectation or bidding more aggressively, and it is this encouragement of trade that drives results. The authors note that if both sides bid too aggressively then it can slow down trade and hence slow down learning. Incentives from best responding however will push buyers and sellers in the other direction. The protocol model assumes coordination between buyers and sellers on the algorithm to use in order to bid, which could in a real world setting encourage collusion among participants and/or platform disintermediation. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Double auctions are ubiquitous as mentioned in the discussion. Which many-to-many double auction bandit settings do you view as the right motivation for this work? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: No - discussions of collusion or platform disintermediation would be beneficial for the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments. **Which many-to-many double auction bandit settings is the motivation?** Online learning in economic markets is an active area of research evident from the abundance of research works in the past few years. This is also acknowledged by *Reviewer BEVv*. Our work contributes to this area by initiating the study of double auction. We draw motivation from multiple practical applications, e-commerce, bidding in wireless spectrum, cloud computing markets to name a few. Please see line 38 - line 47 for details. Our objective here is not to model one particular application in detail, but to develop a framework to study repeated double auctions under bandit learning. As a stylized example consider a specific task, say *image labeling*, in a *decentralized crowdsourcing marketplace*. There are multiple *labelers (sell-side)*, each having her own valuation of labeling a batch of images determined by the time taken to label. Here time taken for labeling is unknown, stochastic, and different for each labeler. The labeling task is accomplished with comparable accuracy by each labeler. There are multiple *companies (buy-side)* that want to get a batch of images labeled in each interaction. Each company has its own valuation of a batch of labeled images at the accuracy of the pool of labeler under consideration. However, the accuracy and hence the valuation of each company is a-priori unknown. The companies and labelers can transact -- labelers are paid by the companies for a batch of labeled images -- through a repeated double auction market. The accuracy of labeled images by a company, and time taken for labeling by a labeler can be learned only through *image-labeling (bandit feedback)*. Our work captures this scenario. **We view the simplicity of our algorithm as a strength:** We note that our algorithm (sellers bid LCB, and buyers bid UCB) attains optimal social regret, and sub-linear individual regrets. Our key contribution is to show under the presence of two-sided uncertainty of double auction markets, simple UCB-LCB based algorithm can succeed. The fact that a simple algorithm can attain this performance is a strength and contribution of the paper. Thus, we respectfully push-back on the opinion that simplicity is a weakness. **Incentives from best responding however will push buyers and sellers in the other direction:** We note that indiscriminately reducing the bids for a buyer, and increasing the bid for a seller is not always best. Indeed, too low a bid from buyer or too high a bid from seller can leave them unmatched leading to high regret. Note that for the true-participant sellers and buyers who benefit from trade the UCB and LCB decays roughly as $O(1/\sqrt{t})$ at time $t$. So for these buyers and sellers, the bids converge to their true valuation. For true-non-participating buyers and sellers there is no gain from trade. UCB-LCB based bidding happens in the vicinity of true valuation while ensuring their participation in trade is vanishing (only $O(\log(T))$ in $T$ rounds). Therefore, UCB-LCB bidding is as incentivized as bidding their respective true valuation for all agents (quantified by the individual regrets). We discuss the incentives of deviation from true valuation in detail in Appendix C along with a summary in our main paper (see line 212 - 221). **Collusion among participants and/or Platform disintermediation:** Thank you for bringing up the possibility of collusion and platform disintermediation within the protocol model. We acknowledge these drawbacks of protocol model, and will mention them in our revised *Conclusion and Future Work* section. However, going beyond the protocol model where the agents are taking part in a repeated game with unknown valuation is out of scope for this paper. In fact, to the best of our knowledge, this limitation is shared by almost all works on Bandit learning in Markets with stochastic rewards as these works also adopt a protocol model (see line 25 - line 29). --- Rebuttal Comment 1.1: Comment: Thank you for the response - based on this and other reviewers, my initial take on the setting was too harsh. I would like to see more general results but in current form it is of interest for the NeurIPS audience. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their thorough and constructive feedback through this process! We will add the discussion from this rebuttal including open problems/limitations of the current results and future directions into the revised version.
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Density of States Prediction of Crystalline Materials via Prompt-guided Multi-Modal Transformer
Accept (poster)
Summary: This paper propose DOSTransformer, a transformer architecture for the task of DOS prediction, that takes energy levels as input instead of predicting a list of energies for different energy levels. The performances of this proposed DOSTransformer is better than MLP, GNN, and E3NN instances they implemented. Strengths: - The proposed method is the first one to take energy levels as input instead of directly predict a list of energies for different energy levels. - Writing: clear and easy to follow. Figures are informative. Weaknesses: - Novelty is limited, core ablation study is missing: the major novelty beyond previous works is taking energy levels as input instead of directly predict a list of energies for different energy levels. To do this, they proposed the multi-modal module that considers energy levels. But this point is not well supported by the performance ablation studies of E3NN. Also, the ablation study for their proposed model about this point (just directly predict a list of energies) is missing. - Missing comparisons with previous crystal neural networks. Previous crystal neural networks including CGCNN, MEGNET, Matformer, ALIGNN, M3GNet can all directly be used for this DOS task, by just changing the output dimensionality from 1 (single scalar value prediction) to M. Just ignoring all these methods is not convincing. Comparing with these methods (at least one or two current SOTA methods) can better demonstrate the importance of taking energy levels as conditions instead of predicting a list of energies. - Missing implementation details and what level of irreducible representations is used in E3NN. Simple E3NN with only irreducible representations of rotation order zero or 1 is not powerful enough. Using representations with rotation order of 2 will boost the performances significantly. Given the not significant performance gains beyond e3nn, further implementation details is needed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: As listed in above weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: They listed the limitations in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments on our work and for recognizing that our work is the first work that integrates energy level as input! We are more than willing to address each of the specific weaknesses and questions in a detailed manner. **W1 (Limited novelty & ablation studies).** **[Regarding the novelty of the paper]** This paper makes two contributions: firstly, as acknowledged by the reviewer, we adopt energy embeddings as the model input instead of directly predicting densities for different energy levels; secondly, we employ prompts to capture structural-specific interactions between crystalline materials and energy levels. As we noted in lines 267-270, the integration of energy embeddings in E3NN, the best baseline method, does not yield significant benefits. This could be attributed to the potential mismatch between the intended purpose of E3NN and the use of energy embeddings. That is, the inclusion of energy embeddings may interfere with the proper learning of material “equivariance” in E3NN, leading to limited advantages in its case. However, it is important to highlight that energy embeddings continue to deliver consistent performance improvements for other models, such as MLP and Graph Networks, showcasing the effectiveness of considering energy levels as input, and applicability in a broader context. **[Regarding the ablation study of the proposed method]** Upon the reviewer's request, we performed ablation studies to explore the benefit of point-wise prediction of energies as in our proposed method compared with directly predicting a list of energies. To do so, we made 2 ablation models as follows (The experimental results are presented in Table 2 in the attached [PDF](https://openreview.net/forum?id=2lWh1G1W1I&noteId=YWxXBLLMtv)): - Ablation 1: We removed the Transformer architecture and utilized only energy embeddings $\mathbf{E}^{0}$, crystal representation $\mathbf{g}_i$, and crystal system prompts $\mathbf{P}$ as input for an MLP to predict the list of densities across different energy levels. - Ablation 2: While keeping the transformer architecture, we obtained crystal representation by pooling energy representations $\mathbf{E}^{L_3, i}$, and then making a direct list-wise prediction with the pooled representation. We have observed a significant drop in the model performance when excluding the transformer layers (Ablation 1), which highlights the crucial role of these layers in enabling effective point-wise interactions between atomic and energy embeddings. Furthermore, the direct list-wise prediction through pooled representation (Ablation 2) yields inferior results compared to DOSTransformer. This underscores the significance of the Transformer layer and reinforces the need to maintain its point-wise prediction mechanism for the DOS prediction task. We sincerely appreciate your insightful feedback and will make sure to incorporate these supplementary experiments into the ablation study section. **W2 (Comparison with previous crystal neural networks).** We fully agree with the reviewer that comparing DOSTransformer to recent Crystal Neural Networks can further enhance the quality of the paper. Therefore, we conducted additional experiments on Matformer, ALIGNN, and CGCNN, by adjusting the output dimensions of these models (51 for Phonon DOS and 201 for Electron DOS), in Table 3 in the attached [PDF](https://openreview.net/forum?id=2lWh1G1W1I&noteId=YWxXBLLMtv). Although both ALIGNN and Matformer exhibited commendable performance, we observe that DOSTransformer consistently surpassed them by a significant margin. These observations again demonstrate the importance of considering energy levels for accurate DOS prediction. We will make sure to include the results in the revised paper, which would undoubtedly elevate the quality of our work. We sincerely thank the reviewer for your valuable insights. **W3 (Missing implementation details of E3NN).** First of all, we apologize for any confusion caused by not elaborating on the implementation details on E3NN. We utilize the most recent code of E3NN [1] provided by the authors in the GitHub repository, and this implementation uses representation with a rotation order of 1. On the other hand, upon the reviewer’s request, we conducted experiments with the E3NN using a representation with a rotation order of 2 in Table 4 of the attached [PDF](https://openreview.net/forum?id=2lWh1G1W1I&noteId=YWxXBLLMtv). In the experiment, we did not observe substantial performance improvement with a rotation order of 2. The limited expressiveness of its radial functions and spherical harmonics may have contributed to this result. Considering that DOS prediction is inherently more complex than other material properties due to its sequential nature, we conjecture that this inherent complexity might have hindered potential performance improvements. [1] Chen, Zhantao, et al. "Direct prediction of phonon density of states with Euclidean neural networks." Advanced Science 8.12 (2021): 2004214. --- Rebuttal Comment 1.1: Title: Responses to rebuttal Comment: Thank you for your effort. The additional ablation studies and more comparisons with recent crystal neural networks enhanced the clarity and addressed my concerns. Therefore, I adjusted my score accordingly. It seems we cannot adjust the original review, but I will adjust my score to 6 if editable. --- Reply to Comment 1.1.1: Title: Response by authors Comment: We thank the reviewer for acknowledging our effort, and for deciding to raise the score. We greatly appreciate it!
Summary: This paper proposes a prompt-based Transformer network for predicting the density of states of crystalline materials. The prompts are used to represent and control the energy and the additional structure information of the materials. Experiments show the proposed method can perform very well on two datasets under different settings. Strengths: - The whole system and method are motivated well and designed carefully. The model design considers the characteristics of both the task and the deep learning methods, e.g., prompt-based Transformer. - The experiments are carefully designed to show the benefits of the proposed methods. - The paper is well-written and easy to read. Weaknesses: - The datasets and experiments are a bit simple. The variations of the material type and structure type in the datasets are restricted. In this sense, although prompts are used to formulate the general variation of the energy and structure, the datasets contain very simple and limited variations. It is a bit limited in evaluating the potential of the proposed method and the techniques on more realistic datasets and applications. - For example, the prompt is used to represent the structure. There are only seven discrete options that will be given during testing. May the model handle the structure variation without relying on the input during testing? - In Table 4, the prompt based method is compared with the one-hot based input. Is the one-hot binary encoding directly used to replace the embedding of the prompt? If so, it might be unfair. How does the one-hot encoding work with a fully connected linear layer (i.e., feeding the one-hot encoding as the input to the FC layer), where the weights in the FC layer are corresponding to the embeddings for each type of the system? - In the worst case, the proposed prompt based method may work similarly to the linear embedding based approaches, on the simple datasets. - The authors may analyze the learned prompts to indicate whether the leaned embeddings can reflect and are consistent with the physical characteristics of the system types or structures indicated by the prompts, such as whether some similarity and relationship between these characteristics can be reflected in the learned prompts. - It seems that fine-tuning do not improves the performance significantly, especially given the identical MAE performance comparing Table 2 and Table 3. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Questions are left with the weakness points. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The limitations are discussed and can reflect the main limitations of the proposed method – the proposed method cannot encode the properties of both materials into a single model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1.** As the reviewer pointed out, using only 7 structural systems may make the dataset seem limited. However, the reason for having 7 structural systems is not a limitation of the dataset but rather a reflection of the knowledge used in crystallography when classifying the structures of materials [1, 2]. In other words, regardless of which dataset is used, there are only 7 crystal systems in the real-world, implying that we are bound to use the 7 crystal system prompts. Also, it is worth noting that the dataset used in the paper, i.e., the Materials Project database, is one of the most extensive public databases in the field of materials science. On the other hand, in real-world scenarios, it is widely acknowledged that databases based on DFT calculations have certain limitations in coverage. These databases often concentrate on particular material types or structural archetypes, leading to biased distributions, as discussed in lines 288-290. To address this concern, we performed experiments on out-of-distribution data in Section 5.3, where both the training and test datasets consisted of materials with different crystal systems. These experiments effectively showcase the adaptability of DOSTransformer to various dataset variations, successfully alleviating the reviewer's concerns. [1] PARK, Woon Bae, et al. Classification of crystal structure using a convolutional neural network. IUCrJ, 2017. [2] YU, Rong, et al. Calculations of single-crystal elastic constants made simple. Computer physics communications, 2010. **W2.** Yes, one-hot encoding is directly used to replace the embedding of prompts. As described in lines 207-210, crystal system information $P_k$ is concatenated with material embeddings $g_i$ and material-specific energy embedding $E_j^L$ and linearly transformed by the function $\phi_2$. Therefore, the main distinction between using Prompts and One-hot vectors lies in how the information about the crystal system is incorporated into the model, either in a soft or hard manner. When using a one-hot vector to represent the crystal system, the model integrates the crystal system information in a hard manner through matrix multiplication. In Table 4, we demonstrated that when incorporating crystal system information into “atom features”, the one-hot encoding of the system performs better than soft prompts. This is because soft prompts may hinder the learning of shared information among various crystalline materials due to their higher number of parameters. However, when incorporating the crystal system information “before the self-attention layers”, the model benefits from soft prompts. This allows the model to learn more flexible information that is advantageous for the system-specific integration of energies and crystalline materials. In conclusion, by incorporating soft prompts in appropriate locations, we were able to complete the optimal model. **W3.** To begin with, our objective in employing crystal prompts is not to comprehend the correlations among crystal systems, but instead to acquire crystal system-specific interactions between a crystalline material and energy levels, aiming to enrich the expert knowledge associated with each distinct crystal system like expert models. Furthermore, within the domain of crystallography, due to the unique and distinct characteristics of each crystal system in terms of symmetry, establishing a straightforward relationship between different crystal systems is not trivial [1]. However, as per the reviewer's suggestion, we conducted specific case studies by assessing the cosine similarity among the crystal system prompts based on Figure 2 in the attached [PDF](https://openreview.net/forum?id=2lWh1G1W1I&noteId=YWxXBLLMtv). Case 1) The correlation between the Cubic and Trigonal systems is approximately nine times stronger compared to the correlation between the Cubic and Triclinic systems. For example, consider the case of Bismuth (Bi). In ambient conditions, Bismuth exists in the Trigonal crystal system. However, when subjected to high-pressure conditions, it transforms into the (body-centered) Cubic crystal system. However, the Cubic and Triclinic systems exhibit the most significant dissimilarities in terms of symmetry, including axial lengths and angles, which makes crystal system transitions to be hard. Case 2) Furthermore, it is noticeable that the correlation between the triclinic and monoclinic systems is roughly ten times more pronounced than the correlation between the triclinic and cubic systems. Albite, a plagioclase feldspar mineral, exemplifies this scenario as its crystal symmetry can shift from triclinic to monoclinic contingent upon temperature changes. Nevertheless, due to the considerable discrepancy in symmetry between the triclinic and cubic systems compared to that between the triclinic and monoclinic systems, the likelihood of materials demonstrating both symmetries is quite low. To sum up, despite the complexity of establishing a direct correlation between crystal systems, our observations indicate that the alignment between prompt relationships and crystallography domain knowledge holds true. [1] Thomas, J.C., et al., A. Comparing crystal structures with symmetry and geometry. npj Comput Mater 7, 164 (2021). **W4.** As mentioned in lines 320-322, the performance improvement during fine-tuning is constrained due to the scarcity of data used in the process. This situation reflects real-world material sciences where existing databases relying on DFT calculations have limited coverage, often emphasizing specific material types or structural archetypes, resulting in a biased distribution. And indeed, we have conducted experiments on various training ratios for fine-tuning in Appendix E.4, revealing that as the volume of training data increases, there is a noticeable enhancement in performance during fine-tuning. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thanks for the authors' response. Although the discussions on the very empirical observations in response to 'W2' cannot fully convince me, I feel the response addresses most of my concerns. I increased the score to 6. --- Reply to Comment 1.1.1: Title: Response by Authors Comment: We sincerely appreciate the reviewer’s acknowledgement of our efforts to address the concerns and their decision to raise the score.
Summary: The paper proposes a new transformer-based method DOSTransformer for predicting density of states of crystalline materials. Different from previous methods, the energy level is additionally modeled as an input modality, i.e. the model takes in material configuration and energy level as input to predict DOS(material, energy). DOSTransformer uses cross-attention layers to relate the material and energy representations. Self-attention layers are also used to integrate information across different energy levels. Learnable prompts are introduced to provide information about the crystal system type, which helps capture structure-specific interactions between material and energy. Experiments on phonon and electron DOS datasets show DOSTransformer outperforms existing methods like graph networks and E3NN, in both in-distribution and out-of-distribution scenarios. Ablations and sensitivity analysis demonstrate the benefits of the different components like energy embeddings, global/system losses and crystal system prompts. Strengths: - Novel problem formulation: The authors provide a new perspective on DOS prediction by treating it as a multi-modal learning problem with material and energy as inputs. This proves to work better for DOS prediction compared to prior material-only models. - Neural net design: The multi-modal transformer architecture with cross-attention and prompt-guided self-attention is well-suited for the problem. It allows capturing complex relationships between material and energies. - Empirical results: The model is evaluated extensively on two datasets — Phonon DOS and Electron DOS — for both in-distribution and out-of-distribution scenarios. The physical validity of predictions is also analyzed through derived material properties. In particular, the introduced energy embeddings are shown to be useful in reducing prediction errors. - Ablation studies: The contributions of different components like the dual losses and crystal system prompts are quantitatively demonstrated through ablation studies. Weaknesses: - Simple prompts: The crystal system prompts are relatively simple vector embeddings. More sophisticated prompt learning methods could be explored to reason about structural information of different systems. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: N.A. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments on our work and for acknowledging the novelty of our work in DOS prediction! We are more than willing to address each of the specific weaknesses and questions in a detailed manner. **W1 (Simple prompts).** As pointed out by the reviewer, our crystal system prompts are implemented using relatively straightforward methods. We fully agree with the reviewers that we can enhance crystal system prompts in various ways. One possible direction would be integrating them with large language models (LLMs). For instance, an approach could involve initializing the system prompts using the embeddings derived from the descriptions of each crystal system. We appreciate the valuable feedback that sheds light on potential paths for future research! --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. I have also read other reviews. I agree with reviewer 9USD that it is worth adding more discussion and literature about using discretization at energy levels. --- Reply to Comment 1.1.1: Comment: We greatly appreciate your effort in reviewing our responses and consistently supporting our research paper! As we respond to reviewer 9USD, we will make sure to incorporate the discussion and literature about discretization of energy levels.
Summary: This work proposes a transformer architecture for predicting density of states of crystalline materials for different energy levels. The distribution of states is spectral property that is approximated as a function of both the material and energy levels. The architectures proposed consists of a multi-modal transformer with self and cross attention layers, and a GNN for embedding. Experiments compare the proposed architecture with state-of-the-art E3NNs, demonstrating a decrease in MSE. Strengths: The main strengths of this work include: 1. The approach considers the crystalline material, energy levels, and structural properties. 2. The problem and methods are well-defined, and the architectures are implemented using state-of-the-art libraries. 3. The results include out-of-distribution experiments, ablation studies, and sensitivity analysis. Weaknesses: The main weaknesses of this work are that: 1. It needs to be clarified what MSE's are required for the approach to be useful in the real world. For example, in proteomics, and RMSD of less than 2-3Angstrom is considered valid compared to experimental results. What would be the MSE's required to achieve such a validation in this domain. 2. The terminology and definition used for fine-tuning and especially prompting in this work differs significantly from the standard definition and should be clarified. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What are the MSE values of bulk, band, and ferm. are considered useful for real-world applications? Are there 38,889 crystalline materials after or before filtering for non-magnetism? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments on our work and for acknowledging the efforts in experiments, including out-of-distribution scenarios! We are more than willing to address each of the specific weaknesses and questions in a detailed manner. **W1, Q1 (Appropriate level of MSE).** To the best of our knowledge, in the literature of DOS prediction, deriving various physical properties based on the model-predicted DOS is a novel endeavor, making it challenging to determine a definitive threshold for the MSE value that is practically applicable in the real world. For band gap prediction, a commonly accepted MSE value is around 0.3 ~ 0.4 [1]. However, we would like to emphasize that a direct comparison of our results with those presented in [1] is not appropriate considering our experimental setup. - **Experimental setup:** As described in Line 277-282, we used the DFT-calculated DOS as the ground-truth DOS, and trained a simple 2-layer MLP with non-linearity to predict the properties of a crystal structure. Then, we use the learned MLP weights to simply predict various physical properties given the model-predicted DOS as input. It is important to note that the learned MLP weights themselves are not perfect, meaning that the properties derived from the model-predicted DOS using the learned MLP weights inevitably exhibit error that is accumulated due to the error in the learned MLP weights. For this reason, rather than directly comparing our results with the results reported in [1], our intention was to compare among our baselines, and see whether the various properties predicted by DOS predicted by our proposed method are relatively more accurate. Thus, in the current context, the MSE value itself might hold less significance, and it is more appropriate to consider it as an evaluation metric to compare among baselines. However, we could reduce the accumulated error by training a more advanced neural network instead of a simple MLP. We appreciate the valuable feedback that sheds light on potential paths for future research! [1] LEE, Joohwi, et al. Prediction model of band gap for inorganic compounds by combination of density functional theory calculations and machine learning techniques. Physical Review B, 2016, 93.11: 115104. **W2 (Terminology of fine-tuning and prompt-tuning).** Although we made efforts to clarify the concepts of fine-tuning and prompt tuning in Section 2.2, we apologize for any confusion that may have arisen. To provide further clarification, we briefly outline the distinctions between fine-tuning, discrete prompt designing, and continuous prompt tuning. In the field of Natural Language Processing (NLP), fine-tuning was introduced to adapt pre-trained language models (LM) for specific downstream tasks. Conversely, discrete prompt designing aims to reformulate downstream tasks to resemble those encountered during the original LM training, achieved by using a textual prompt. Recently, continuous prompt tuning has been proposed, which involves prompting directly in the embedding space of the model. These prompts have their own parameters, which can be adjusted based on training data from the downstream task. [1] In this paper, we adopt the concept of continuous prompt tuning to guide the model in understanding the structural types of crystalline materials. While a single prompt was employed for each downstream task in NLP, our approach utilizes a single prompt for each of the seven structural types of materials discussed in the paper. It is important to note that continuous prompts are utilized due to the non-trivial nature of modeling the structural type of crystalline material using human-engineered prompts. Additionally, concerning the use of "fine-tuning" in Section 5.3 and Table 3, we intended it to refer to tuning on various downstream tasks. In Table 3, "All" denotes the model that updates all parameters for the downstream task, whereas "Only Prompt" refers to the model that exclusively updates the parameters of continuous prompts, which is the same as "Continuous prompt tuning." We will thoroughly revise the manuscript to ensure clarity on these matters. [1] LIU, Pengfei, et al. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 2023, 55.9: 1-35. **Q2 (Non-magnetism?).** Yes, 38,889 crystalline materials are non-magnetic materials obtained after filtering out magnetic materials. This choice was influenced by the well-known issue of calculation accuracy in the DFT-generated DOS of magnetic materials, which is considered unreliable [1, 2]. In short, we only used non-magnetic materials for training our model, since using magnetism materials for training may interfere with the model's ability to learn the DOS of non-magnetic materials. [1] Zeller, R. (2006). Spin-polarized dft calculations and magnetism. Computational Nanoscience: Do It Yourself, 31, 419-445. [2] Poblet, J. M., López, X., & Bo, C. (2003). Ab initio and DFT modelling of complex materials: towards the understanding of electronic and magnetic properties of polyoxometalates. Chemical Society Reviews, 32(5), 297-308.
Rebuttal 1: Rebuttal: Dear reviewers, thank you for your valuable comments on our work. We are more than willing to address each of the weaknesses and questions in detail. We also attach a PDF file that contains Figures and Tables for rebuttal. Pdf: /pdf/d0b58d6c0e474caed75bff6d8bb744b86b572d54.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This work proposes a transformer model to predict the density of states of given materials. The model takes in all energy levels that are desired for the Density of States calculation as a 'prompt' and information about the materials structure, the output is a prediction of the DOS at the given energy levels. Strengths: + The DOSTransformer is evaluated on both in-domain structures and out of domain structures against relevant baseline models and previously reported models. + The DOSTransformer model design makes use of domain specific knowledge, such as the crystal structure type and the energy levels. + Authors demonstrate that the task of predicting DOS is heavily added by providing energy levels for targeted prediction via relevant ablation studies + The model is clearly described. I was able to understand all the components of the proposed model. Weaknesses: - More could be done to describe the task of predicting DOS for downstream applications, especially for a non-materials science audience. How are the DOS outputs for predicting bandgap energy / electrical conductivity? How many grid points are needed to gain an 'accurate enough' prediction of these properties? What is kind of error in the DOS prediction is reasonable? Are there certain energy levels where it is more critical to predict the DOS correctly? - I would appreciate some more statistics of the datasets used (PhononDOS and ElectronDOS) in the main text (e.g. # atom types, # crystal types). Technical Quality: 3 good Clarity: 3 good Questions for Authors: - As I understand it one of the tasks in this work is to predict the output of a function (Density of States) on a fixed grid of energy levels. Are there other examples in the literature where transformers are used to predict the outputs of a mathematical function, and the fixed grid of inputs is provided as a prompt for the model? If there are, it would be great to learn about them in the background section. - Because the DOS is predicted over a finite grid, I am curious about the smoothness of the DOSTransformer model between finite grid points. If the DOSTransformer model was to be evaluated at energy levels between the provided finite points in the prompt, are the predictions smooth where you'd expect the function to be smooth? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - More description should be provided about the datasets used. How many different elements/ types of materials were included int he PhononDOS and ElectronDOS provided? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments on our work. We are more than willing to address each of the specific weaknesses and questions in a detailed manner. **W1 (Downstream tasks of DOS prediction).** - How are the DOS outputs for predicting bandgap energy / electrical conductivity? DOS serves as a fundamental representation of the electronic density in an atomic system across various energy states. As the electronic density directly influences the electric charge and physical energy of the atoms, DOS provides valuable insights into material properties, including band gap and electrical conductivity. For instance, band gap, a crucial factor in determining the material's chemical applications, can be derived by examining the highest and lowest electronic densities in the DOS. - How many grid points are needed to gain an 'accurate enough' prediction of these properties? The electron distributions vary for each material, and the number of required grid points depends on these differences. Due to the variations, we adopted the DFT configurations from the Materials Project database, one of the most extensive public databases in the field of materials science. - What kind of error in the DOS prediction is reasonable? We mainly compare DOSTransformer to baselines in terms of MSE and MAE as have done in previous works [1]. In addition to the metrics, we further introduced R2, which is crucial in evaluating the regression models, by indicating the proportion of the variance in the dependent variable (i.e., DOS) that is predictable from independent variables (i.e., crystalline materials and energies). Moreover, in terms of physics, the density peaks within the DOS carry significance beyond the entirety of the density sequence across energy states. In other words, it's important to predict the energy level at which the electron density is the highest. To address this, we introduce the absolute error of peak positions as an extra measure to evaluate the model's prediction accuracy. You can find this in Table 1 in the attached [PDF](https://openreview.net/forum?id=2lWh1G1W1I&noteId=YWxXBLLMtv) . In the table, we observe that DOSTransformer provides more precise predictions for the peak points in the DOS, showcasing the practical value of the predicted DOS. [1] CHEN, Zhantao, et al. Direct prediction of phonon density of states with Euclidean neural networks. Advanced Science, 2021, 8.12: 2004214. - Are there certain energy levels where it is more critical to predict the DOS correctly? Important energy levels are different for each material. However, the energy levels of the highest electronic densities are important because they determine many energy-related properties of the materials. For this reason, we evaluated how accurately the predicted peak points in the DOS match the actual peak points in the real DOS by comparing them in terms of MSE in Table 1 in the attached [PDF](https://openreview.net/forum?id=2lWh1G1W1I&noteId=YWxXBLLMtv) . **W2 (Dataset statistics).** In addition to Electron DOS data statistics provided in Appendix A.3., we provide more details in the attached [PDF](https://openreview.net/forum?id=2lWh1G1W1I&noteId=YWxXBLLMtv) . **Q1 (Literature survey).** Firstly, we would like to provide clear definitions of the terminology used in this paper. We refer to the embeddings for each fixed grid of energy as "energy embeddings," and the prompts for each crystal system as "crystal system prompts." When conceptualizing each energy value as the position of a word in a sentence, energy embeddings can be analogous to the positional encoding used in traditional transformers. In the original proposal of the Transformer architecture, a sinusoidal function was suggested as a method for positional encoding. However, the sinusoidal function may have limitations in terms of learnability and flexibility, which can affect its effectiveness [1]. To address this issue, most pre-trained language models [2, 3] employ learnable vector embeddings as positional representations. Additionally, [1] enhances positional encoding by constructing a learnable fully-connected feed-forward sinusoidal positional encoding network. We apologize for any confusion that may have arisen as a result of the missing background section. To address this, we will make comprehensive revisions to the section by incorporating the background about positional features. [1] Guoxin Wang, et al. A Simple yet Effective Learnable Positional Encoding Method for Improving Document Transformer Model. ACL Findings 2022. [2] DEVLIN, Jacob, et al. Bert: Pre-training of deep bidirectional transformers for language understanding. ACL 2019. [3] LIU, Yinhan, et al. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019. **Q2 (Smoothness of DOS prediction).** As pointed out by the reviewer, we transformed the continuous energy values into finite grids of energy levels through binning. This discretization process prevents the evaluation of DOS prediction models, including DOSTransformer, at energy levels between the provided finite points. However, for specific ranges of energies with desired resolutions, we have the flexibility to train the model with the newly processed data. As a result, we believe that DOSTransformer can be applied universally to accommodate different energy levels and resolutions according to specific research needs. On the other hand, a promising avenue for DOS prediction, considering the continuous nature of energies and DOS, could be the incorporation of neural ordinary differential equations (Neural ODEs) [1]. Neural ODEs have shown effectiveness in capturing the continuous nature of sequential data, making them a potential candidate for this task. We are grateful for the insightful feedback, which illuminates future research directions! [1] Chen, Ricky TQ, et al. "Neural ordinary differential equations." NeurIPS 2018 --- Rebuttal Comment 1.1: Comment: I have read the reviewer's responses. Thank you to the reviewers for their careful responses and improvements to the manuscript!
null
null
null
null
null
null
Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization
Accept (poster)
Summary: This paper presents a method PEQA that combines quantization and parameter-efficient fine-tuning. It takes both advantages, including updating only a tiny fraction of model weights and saving the memory by quantization. The model weights will be decomposed into a matrix of low-bit integers and a scalar vector. During the fine-tuning, only the scalar vector will be updated. The paper has done comprehensive experiments to measure its effectiveness. The proposed method outperforms PTQ and has an acceptable degradation compared to LoRA, but which significantly saved the memory. Strengths: 1. The paper is well-written and easy-to-follow. 2. The idea and the method are clean. 3. The evaluation is comprehensive. Weaknesses: 1. The degradation of the accuracy of the proposed method compared to LoRA is non-negligible. 2. The paper combines two existing techniques rather than inventing a new technique. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Table 2: Why does PEQA have an even lower perplexity in some cases than QAT? 2. How does PEQA compare to the direct combination of LoRA and quantization (e.x. compress to 4-bit, then LoRA on the 4-bit matrices)? 3. What is the intuition behind the decomposition used in PEQA? 4. Table 4: Why PEQA degrades the performance of Five-Shot? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: 1. The degradation of the accuracy of the proposed method compared to LoRA is non-negligible. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: [**Weakness1**] As the reviewer pointed out, when comparing PEQA with LoRA in terms of the number of trainable parameters (e.g., LLaMA-13B+LoRA vs. LLaMA-13B+PEQA), the degradation of the performance of PEQA relative to LoRA seems to be non-negligible. However, when comparing PEQA with LoRA in terms of the model size (e.g., LLaMA-13B+LoRA vs. LLaMA-30B+PEQA) as shown in Figure 2 (b), PEQA outperforms LoRA in all cases. We believe that it is more fair to compare PEQA with LoRA in terms of the model size than the number of learnable parameters due to the fact that the checkpoint size for trainable parameters is negligible relative to the model size, which makes the overall memory required for deployment totally dominated by the model size. [**Weakness2**] While the comment is ambiguous as to which two techniques the reviewer thinks our method is based on, we would ike to reiterate that PEQA is a straightforward yet effective way to achieve PEFT and quantization within the same framework. .PEQA inherits (1) the reduction in the number of trainable parameters and task-switching capability from PEFT and (2) the reduction in the model size and the acceleration of text generation inference from quantization, which makes PEQA more practical and effective than both techniques. Furthermore,, the theoretics and the underlying mechanism that allow PEQA to ourperform previous SOTA is non-trivial, and we intend to address them in the camera-ready version, if possible. [**Question1**] In Table 2, as highlighted by the reviewer, PEQA exhibits lower perplexity in certain scenarios compared to QAT. This phenomenon is likely influenced by the straight-through estimator (STE) utilized in the QAT approach we presented. The QAT method in Table 2 is a rudimentary one that simply rounds weights as per Eq. 3 and updates $W_0$ and $s_0$ using the STE. Notably, the use of STE can sometimes lead to compromised accuracy or increased perplexity due to its approximated gradients possibly failing to converge appropriately. The naive QAT approach we used can experience performance degradation compared to its full-precision counterpart. Our primary intention in comparing PEQA with this basic QAT in Table 2 was to emphasize the rationale for keeping the integer matrix static, rather than to suggest PEQA's superiority over all QAT methods. We will ensure to clarify this perspective in our revisions to avoid any misconceptions. [**Question2**] The direct combination of LoRA and quantization, such as in the QLoRA approach, is indeed a notable method. As the reviewer highlighted, LoRA can be directly applied to quantized pre-trained models like in the QLoRA technique. However, there are some distinctions and nuances worth mentioning. While QLoRA can indeed achieve accuracy levels comparable to LoRA as detailed in [1], there's an important caveat: it doesn't benefit from the inference acceleration. This is because the weights can't be represented in a quantized format when merging quantized pre-trained weights with LoRA for deployment. Furthermore, if there's a scenario where you wish to fine-tune a deployed quantized PLM for some reason, our methodology might also present itself as an optimal choice. Specifically, if the primary goal is memory-efficient fine-tuning and rapid task-switching of the Quantized PLM, QLoRA could be a viable approach. However, if one's aim is to maintain the quantized form of the PLM and deploy services via memory-efficient fine-tuning with a focus on latency during deployment, PEQA emerges as a potential best-fit. [**Question3**] The intuition behind the decomposition used in PEQA stems from a desire to inherit from the advantages of both PEFT and quantization. As quantized parameters are expressed as the Hadamard product of full-precision scales and an integer matrix, we freeze the integer matrix while allowing the full-precision scales to be adaptable to a downstream task. [**Question4**] As highlighted by the reviewer, Table 4 reveals that the five-shot performance of LLaMA-7B + PEQA is inferior to that of LLaMA-7B alone. Conversely, for LLaMA-13B and LLaMA-30B, PEQA yields an improvement in the five-shot performance. In view of these findings, the five-shot performance drop of LLaMA-7B + PEQA can be regarded as a temporary variation, and it would be therefore premature to conclude that PEQA diminishes performance in the five-shot setting. --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications. I have no more question, and would like to keep the original assessment. --- Reply to Comment 1.1.1: Comment: We are pleased that we were able to address your questions. We will certainly incorporate your valuable comments into the revised manuscript.
Summary: This paper introduces a new framework, PEQA, for efficiently tuning Large Language Models (LLMs). PEQA uniquely fine-tunes the scale parameters during the instructing tuning phase while keeping the backbone parameters frozen. This approach allows PEQA models to maintain advantages during both the training and inference phases. The primary experiments are performed on the Alpaca datasets. The paper also compares PEQA's performance on various datasets with other efficient tuning methods, such as Lora. Strengths: The paper is strongly motivated, emphasizing the importance of memory efficiency in tuning methods. The authors highlight the role of quantization methods in reducing memory usage beyond what is achievable with parameter efficiency methods alone [1]. The paper is well-written and easy to follow. [1]. https://github.com/huggingface/peft Weaknesses: Table 1 could be improved by emphasizing that other parameter-efficient tuning methods can also apply post-training quantization methods, such as GPTQ and AWQ. The baseline should include the post-training quantization methods applied before or after fine-tuning (e.g., Lora and Adapter), and report on the memory usage during inference and training. This is particularly relevant as the HuggingFace PEFT library has incorporated Int8 quantization. Reference: [1] QLoRA: Efficient Fine-tuning of Quantized LLMs Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See Weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: See Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: [**Weakness1**] We appreciate the reviewer's insightful suggestion. In response, we have updated Table 1 to include both PEFT+PTQ and PTQ+PEFT as per your recommendation. We understand the importance of showcasing that other parameter-efficient tuning methods can also apply post-training quantization methods like GPTQ and AWQ. This modification will be incorporated in our revised manuscript. Thank you for your valuable feedback. | | Fine-Tuning ||| Deployment || |:----:|:----:|:----:|:----:|:----:|:----:| | Method | DRAM | Tuning Speed | DRAM | Inference Speed | Task-Switching | | Full Fine-Tuning | $457$GB | Slow | $131$GB | Slow | Slow | | PEFT | $131$GB | Fast | $131$GB | Slow | Fast | | PEFT+PTQ | $131$GB | Fast | $33$GB | Fast | Slow | | PTQ+PEFT | $33$GB | Fast | $33$GB | Slow | Fast | | $\textbf{Quantization-Aware PEFT (ours)}$ | $\textbf{33GB}$ | $\textbf{Fast}$ | $\textbf{33GB}$ | $\textbf{Fast}$ | $\textbf{Fast}$ | [**Weakness2**] Post-training quantization (PTQ) after fine-tuning has been addressed through LoRA + OPTQ in our work. As for PTQ applied before fine-tuning, it could indeed be conducted as you suggested with methodologies such as QLoRA. As QLoRA is a concurrent work with PEQA, an apple-to-apple comparison between QLoRA and PEQA has not yet been completed due to time constraint of the rebuttal period. However, as can be seen from the updated Table 1, approaches like QLoRA do not ㅈaccelerate inference in the deployment phase. QLoRA could indeed be a good method if the requirements solely focus on memory-efficient fine-tuning and quick task-switching of the Quantized PLM. However, if maintaining the quantized PLM form and deploying services through memory-efficient fine-tuning are required, with a particular emphasis on latency during deployment, we suggest that PEQA may be the most suitable choice. As for the HuggingFace PEFT library incorporating Int8 quantization, we will study and consider this implementation for our future work and aim to compare it to our method for a more comprehensive evaluation. Thank you for your valuable suggestions. --- Rebuttal Comment 1.1: Comment: Dear reviewer, We genuinely value your feedback and are always open to further discussions. We also would like to highlight the additional implications of memory usage during the fine-tuning process in Table D of uploaded PDF. The memory consumption is not solely dictated by the model size but is also influenced by various other factors[1]. Our approach with PEQA inherently offers memory advantages during fine-tuning by striving to minimize both the model size and the number of training parameters. To provide a clear understanding of these benefits, we conducted tests using a single NVIDIA A100-80GB GPU and the causal language modeling code from the HuggingFace repository[2]. Both LoRA and PEQA fine-tuned the LLaMA-7B on the Wikitext2 dataset with a batch size of 2 without gradient accumulation. Our findings indicated that while LoRA peaked at a memory usage of 59GB during optimization, PEQA used just 43GB. Remarkably, this disparity (16GB, 7B) escalates as the model size increases; for instance, a 65B full-precision model under LoRA occupies 130GB, whereas PEQA remarkably uses just 33GB. Additionally, LoRA encountered Out-Of-Memory (OOM) issues at a batch size of 4, whereas PEQA, due to its efficiency, continued training seamlessly. Should you have any inquiries or require clarifications about our rebuttal, please don't hesitate to reach out. We are eager to address any concerns and elucidate potential ambiguities in greater depth. [1] https://huggingface.co/docs/transformers/perf_train_gpu_one#anatomy-of-models-memory. [2] https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm_no_trainer.py
Summary: The paper presents a novel approach named Parameter-Efficient and Quantization-aware Adaptation (PEQA) to address the challenges of efficiently fine-tuning and deploying large language models (LLMs). The paper demonstrates PEQA's effectiveness and scalability through extensive experiments, comparing it with competitive baselines across a range of tasks from natural language understanding to generation benchmarks. Strengths: * The paper centers on the topic of efficient model compression and fine-tuning, framing its approach through a unified method referred to as PEQA. * The authors' proposed method offers benefits, most noticeably in substantially reducing the memory usage associated with optimizer state saving and the final model size. * The authors have undertaken a comprehensive evaluation setup which encompasses several benchmark tasks and a variety of model sizes. Weaknesses: * The overall presentation of the paper could be improved for clarity. Specifically, the meaning of statements such as "Notice that s0 and z0 have nothing to do with a choice of downstream task" on Line 148, and the term "integer quantization indices" on Line 149, are not clear. * The calculation of memory usage during fine-tuning is not sufficiently explained. While it's understood that the proposed method lowers optimizer state storage by only updating scaling factors, the paper does not address memory usage during gradient calculation, a crucial aspect that often dictates peak memory usage during optimization. * It's unclear why the choice of only updating scaling factors is advantageous. The paper only compares the proposed method with LoRA, limiting the assessment of its effectiveness. For instance, the BitFit method (Ben Zaken et al., 2022) demonstrates that fine-tuning the LLM can be achieved by only updating biases. Moreover, the paper does not discuss the potential outcomes of optimizing zero-points only or optimizing both zero-points and the scaling factor. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: * The methodology section of the paper leaves some key details unaddressed. Specifically, it's unclear how the gradient of the scaling factors is computed in the proposed model. The paper PACT by Choi, Jungwook, et al. [1] presents two different ways to compute the gradient for the scaling factor/clamp_range for the quantization function. It would be beneficial for the authors to clarify which method was adopted in their work. [1] Choi, Jungwook, et al. \Pact: Parameterized clipping activation for quantized neural networks.\ arXiv preprint arXiv:1805.06085 (2018). Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: see above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: [**Weakness1**] As s0 and z0 are quantization parameters for pre-trained weights W0, our intention was to imply that both s0 and z0 are not related to any downstream task at all. The term “integer quantization index” is also used in [1], which means the rounding of W0/s0, so we refer to $\overline W_0$ as the integer quantization indices of W0. To enhance clarity, we will revise these expressions in our manuscript. [1] https://en.wikipedia.org/wiki/Quantization_(signal_processing) [**Weakness2**] We appreciate the reviewer's insightful comment regarding the calculation of memory usage during optimization. To clarify, the computation of the gradient for the quantization scale in PEQA necessitates dequantizing the quantized weight to a 16-bit half-precision format. Despite this process, it should be noted that weights from other layers retain their low-bit quantized state. This results in a lesser peak memory usage during optimization for PEQA compared to LoRA. Our proposed model achieves memory efficiency not only during the inference phase but also throughout the optimization process, by minimizing memory consumption related to the model size. We acknowledge the importance of detailing this aspect in our work and will ensure to elaborate on this point in our revision. Thank you for your valuable feedback. We also kindly point out that memory usage during fine-tuning is influenced not only by the model size but also by other factors[2]. Hence, in our endeavor to reduce both the model size and the number of training parameters, we adopted PEQA, which provides memory benefits during fine-tuning. To quantify the extent of these memory benefits in actual training scenarios, we measured the memory peaks. Our experimental setup utilized a single NVIDIA A100-80GB GPU and employed the causal language modeling code from the HuggingFace repository [3]. Both LoRA and PEQA fine-tuned the LLaMA-7B on Wikitext2 dataset with a batch size of 2 without gradient accumulation. The results of our tests revealed that LoRA displayed a peak memory usage of 59GB during optimization, whereas PEQA consumed only 43GB. Furthermore, while LoRA experienced Out-Of-Memory (OOM) issues with a batch size of 4, PEQA, owing to its reduced memory usage, was able to proceed with training at this batch size. Accepting the feedback of the reviewer, we will incorporate these findings into our revised manuscript to facilitate a clearer understanding for our readers. Thank you for your valuable feedback. [2] https://huggingface.co/docs/transformers/perf_train_gpu_one#anatomy-of-models-memory. [3] https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm_no_trainer.py [**Weakness3**] Uniform quantization can represent both asymmetric and symmetric quantizations, hence it's not always necessary to mandate the use of a zero-point. This is why adopting a strategy of only learning the scale factor serves as a fundamental and scalable baseline. We opted for this approach to clearly establish its advantages. To determine the efficacy of learning only the scaling factors, we've incorporated additional experiments. By referring to the added table, it's evident that merely optimizing zero-points does not yield effective learning outcomes. Moreover, simultaneously optimizing both zero-points and scaling factors doesn't present any significant improvement in accuracy either. While parameter-efficient adaptation methods, such as BitFit, offer a memory-efficient fine-tuning and preserving quick task-switching abilities, they do not outperform LoRA in terms of accuracy[4]. Our proposed method, PEQA, which updates the uniform quantization scale, was presented as the most straightforward and effective approach that is not only memory-efficient during fine-tuning but also facilitates accelerated inference during deployment, maintains quick task-switching abilities, and delivers acceptable accuracy. We acknowledge this feedback and ensure that the revised manuscript reflects these points accordingly. Given the time constraints of the rebuttal period, our revised manuscript will incorporate additional BitFit experiments to position PEQA alongside other methods as recommended by the reviewers. | | Zero-point only | Scale-factor only (PEQA) | Both scale-factor and zero-point | |:----:|:----:|:----:|:----:| | 7B | $11.56$ | $5.84$ | $5.86$ | | 13B | $9.83$ | $5.30$ | $5.34$ |. [4] Hu, Edward J., et al. "Lora: Low-rank adaptation of large language models." The Tenth International Conference on Learning Representations (ICLR), Virtual Event, April 25-29, 2022. [**Question1** ] Let the quantization scale $\boldsymbol{s}\_0 = \begin{bmatrix} s_{0, (1)} \newline s_{0, (2)} \newline \vdots \newline s_{0, (n)} \end{bmatrix}$, the integer matrix $$ \overline{\boldsymbol{W}}\_0 = \begin{bmatrix} \overline{W}\_{(1, 1)} , \overline{W}\_{(1, 2)} , \cdots , \overline{W}\_{(1, m)} \newline \vdots \newline \overline{W}\_{(n, 1)} , \overline{W}\_{(n, 2)} , \cdots , \overline{W}\_{(n, m)} \end{bmatrix},$$ and the quantized weight matrix $$ \widehat{\boldsymbol{W}} = \begin{bmatrix} \widehat{W}\_{(1, 1)} , \widehat{W}\_{(1, 2)} , \cdots , \widehat{W}\_{(1, m)} \newline \vdots \newline \widehat{W}\_{(n, 1)} , \widehat{W}\_{(n, 2)} , \cdots , \widehat{W}\_{(n, m)} \end{bmatrix}.$$ Then, the gradient of $\mathcal{L}$ with respect to the scaling factor $\boldsymbol{s}\_0$ is given by ${\partial \mathcal{L} \over \partial \boldsymbol{s}\_0} = \begin{bmatrix} {\partial \mathcal{L} \over \partial s_{0, (1)}} \newline {\partial \mathcal{L} \over \partial s_{0, (2)}} \newline \vdots \newline {\partial \mathcal{L} \over \partial s_{0, (n)}} \end{bmatrix}$ where ${\partial \mathcal{L} \over \partial s_{0, (i)}} = \Sigma_{j=1}^m \overline{W}\_{(i, j)} {\partial \mathcal{L} \over \partial \widehat{W}\_{(i, j)}}$ for $1 \le i \le n$. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thanks for the response. My questions are well addressed. I raise my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback. I'm pleased to hear that your concerns have been addressed. We will make sure to incorporate your valuable suggestions into the revised manuscript.
Summary: This paper introduces a model compression method called Parameter-Efficient Quantization-Aware Adaptation (PEQA) to tackle the size challenge of Large Language Models (LLMs) while enhancing their task-specific fine-tuning. The PEQA technique quantizes the fully-connected layers into quantization scales and integer values for initialization, and during downstream tasks, only the quantization scales are fine-tuned, leaving the integer matrix fixed. This approach significantly reduces the parameters required for gradient updates and the memory for loading pre-trained model weights, making fine-tuning more memory-efficient than other techniques such as Post Training Quantization (PTQ) and Quantization-Aware Training (QAT). However, there are several concerns regarding novelty and effectiveness of the proposed methods. Strengths: - The paper provides an easily understandable explanation of the existing research trends and exhibits good readability. - By quantizing the pre-trained weights to sub-4bits, the memory cost for LLM fine-tuning has been significantly reduced. This enables a wider range of individuals to fine-tune LLMs according to their desired tasks efficiently. In particular, the existing similar approach, PEFT + PTQ technique, has been studied for models of size smaller than 1.3B. However, PEQA proves to be effective even for models as large as 65B and reduces memory costs. - Through various experimental results, the paper demonstrates the performance of the PEQA approach in fine-tuning and its application of quantization compared with PTQ and QAT results. This effectively shows how well the method works for the purpose of LLM fine-tuning. Weaknesses: - This paper's significant contribution, fine-tuning only the quantization scales (termed PEQA), seems similar to AlphaTuning [22]. The paper does not thoroughly discuss the limitations of AlphaTuning, only noting its lack of evaluation on large LLMs. - The baseline settings in the evaluation setup are unusual. For instance, the authors claim PEQA matches or exceeds QAT's performance. However, QAT's accuracy appears lower than state-of-the-art methodologies: e.g., LLM-QAT[a] demonstrated that 4-bit weight quantization almost reproduces the full-precision accuracy, but QAT in PEQA (Table2, Table3) shows noticeable degradation (LoRA 10.63 vs QAT 11.07). Such understated baselines could potentially mislead readers regarding PEQA's true performance. [a] LLM-QAT: Data-Free Quantization Aware Training for Large Language Models - The authors state that PEQA maintains generalization capabilities, yet the results are inconsistent. Table 4 indicates PEQA's zero-shot accuracy matches LoRA, but this improvement disappears in five-shot accuracy. Therefore, it is not clear if PEQA really preserves generalization capability or not. - There is little understanding of why fine-tuning only the quantization scales is sufficient. Therefore, it is hard to be convinced that the proposed method would work as expected. For example, Table 5 shows that PEQA even noticeably outperforms LoRA (without quantization) LoRA for zero-shot MMLU (and even LoRA+OPTQ outperforms LoRA for LaMMA-7B), which counters the accuracy trends of "LoRA > PEQA > LoRA+OPTQ" consistently shown in all the other tables. It would be desirable to provide insights into such unexpected evaluation results. - The problem setup in Section 3.1 doesn't align with PEQA's main objectives. The memory-bound issue during text generation inference discussed in this section seems less relevant to PEQA's fine-tuning memory efficiency benefits. Since forward computation isn't performed in the generation style during training, and text generation's memory-bound issue can be resolved by quantization (e.g., LoRA + OPTQ), including this issue in the problem setup may be misleading. - Comparing 3-bit weight quantization with OPTQ is valuable, but considering OPTQ's known limitations at this level of granularity, a fine-grained quantization comparison might be more convincing. Furthermore, a comparison with recent state-of-the-art weight quantization methodologies, such as LoRA + AWQ, could provide a clearer understanding of PEQA's effectiveness. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What could be the reason for PEQA outperforming FP fine-tuning with LoRA (+LoRA) as shown in Table 5? Additionally, the performance of applying OPTQ to LoRA actually increases in LLaMA-7B. Could this be interpreted as the LoRA Fine-Tuning approach not being optimally applied in this experiment? - From the Five-Shot results in Table 4, we can observe that the performance of PEQA on LLaMA-7B actually decreases, or its fine-tuning effect is insufficient compared to LoRA. Could this result be interpreted as the In-Context Learning ability of LLM not being well-preserved in PEQA Fine-Tuning? - How does the number of learnable parameters or model size change with a decrease in group size in Table 8? Only perplexity is provided, making it difficult to observe the trade-off together. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors did not explain the limitations of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: [**Weakness1**] When diving deeper into the quantization scale specifics which are learnable parameters of both methods, it's worth noting that PEQA's adherence to uniform quantization means there's only one shared quantization scale for integer weight. Conversely, AlphaTuning's non-uniform approach means that for a $b$-bit quantization, there are $b$ individual quantization scales for each weight matrix. Despite having multiple scales, AlphaTuning only fine-tunes one, leaving the rest static. As such, the number of trainable parameters are identical and AlphaTuning seems to offer a larger potential for a well-fitted model, but it can be easily seen that $b-1$ rest static scales introduced in AlphaTuning have limited utilizability, and thus the method may be prone to overfitting as evident through empirical results. In our appendices, we've showcased a direct performance comparison between the two using a 1.3B model. PEQA, drawing from its methodological advantages, consistently demonstrates a performance that's superior by at least $0.7$ ppl. We also intend to include more detailed studies to address the fundamental differences between previous work and ours. [**Weakness2**] Although the performance of PEQA can be similar to or better than that of QAT in Table 2, the QAT approach presented in Table 2 is one of the simplest QAT methods that just rounds weights like Eq. 3 and updates $W_0$ and $s_0$ by using the straight-through estimator. As a result, unlike LLM-QAT, the state-of-the-art QAT technique for LLMs (that has been introduced after the NIPs submission deadline), such a rudimentary QAT method in Table 2 can cause performance degradation compared to its full-precision counterpart. As the reviewer mentioned, such a naive QAT approach seems to be an underestimated baseline that might mislead readers to understand that PEQA can either match or surpass QAT methods. However, note that our intention in juxtaposing PEQA with this basic QAT approach in Table 2 was to justify keeping the integer matrix in a frozen state rather than the superiority of PEQA compared to QAT methods. To establish a reliable baseline, we provide full-precision PEFT baseline results, LoRA (higher upper bound), in Table 3. [**Weakness3 & Question2**] In the case of zero-shot, PEQA's accuracy matches LoRA, while the improvement becomes less pronounced when transitioning to five-shot accuracy. The results in Table 4 & 5 bring forth an interesting phenomenon often termed as the "Alignment tax" [1,2,3]. The concept denotes the potential compromise in performance on traditional NLP tasks as a trade-off for enhanced instruction-following or alignment capabilities. Specifically, as the instruction-following performance heightens, as reflected in Table 5, there's a conceivable penalty that can be observed in tasks like common sense reasoning. This effect, the Alignment tax, is especially more noticeable in smaller models. Piecing everything together, our overarching claim is to emphasize that even after fine-tuning large language models (LLMs) in a compressed state via the PEQA method, the performance in logical reasoning and in-context learning stands its ground. [1] Training language models to follow instructions with human feedback. [2] A general language assistant as a laboratory for alignment. [3] Training a helpful and harmless assistant with reinforcement learning from human feedback. [**Weakness4 & Question1**] (Please refer to the reply of Weakness 2) We recognize the significance of theoretical aspects of PEQA. We plan to show theoretically why updating only the quantization scales is sufficient as a future work. Given that LoRA outperforms LoRA+OPTQ for LLaMA-13B in Table 5, the experimental result where LoRA+OPTQ outperforms LoRA for LLaMA-7B can be regarded as an exceptional occurrence. Yet, PEQA performs better than LoRA for both LLaMA-7B and LLaMA-13B in Table 5, which might be because the number of trainable parameters in PEQA would be more appropriate than that in LoRA (QKVO16) for acquiring the instruction-following ability when fine-tuning a model with an instruction-following dataset (e.g., Alpaca). Similar to the observation that LoRA can be better than full fine-tuning despite possessing fewer learnable parameters, even though the number of learnable parameters in LoRA (QKVO16) is six times more than that in PEQA as seen in Table 7, PEQA would be better suited for developing the instruction-following ability than LoRA. [**Weakness5**] We kindly remind you that the core objective of PEQA is twofold: achieving memory efficiency during fine-tuning and benefiting from acceleration and quick-task switching during the deployment phase for text generation inference. By employing quantization during fine-tuning, PEQA ensures that weights are preserved in their integer format post fine-tuning. This approach not only tackles the memory constraints during fine-tuning but also addresses the memory-bound issue during text generation inference. As PTQ is applied after the fine-tuning process, PEFT+PTQ doesn't offer the advantages of memory-efficient fine-tuning. [**Weakness6**] We have performed an additional experiment with LoRA+OPTQ to better elaborate the level of granularity. As pointed out by the reviewer, particularly with LLaMA 13B, we observe that the performance of LoRA+OPTQ improves as the process becomes more fine-grained. However, we continue to see that PEQA consistently demonstrates superior performance when compared to LoRA+OPTQ in Table A(in PDF). [**Question3**] Parameters in the order of millions have a negligible impact on a model size measured in GBs. When quantizing LLaMA-7B with a group size of 256, a single linear layer has approximately 16 times more learnable scales. When this is scaled linearly, the channel-wise learnable parameter count, which initially was 1.36M, increases by 16 times to reach 21.76M. Yet, this only results in an additional 40MB in memory usage. --- Rebuttal Comment 1.1: Title: Thank you for the responses. Comment: I appreciate the authors for the detailed responses and additional experimental results. Although some of the answers do not sufficiently answer the questions, I believe the additional experimental results, as well as the in-depth discussion, would be valuable for future research in this field. Therefore, I raise my original rating. --- Reply to Comment 1.1.1: Comment: We are pleased to note that some of your concerns have been addressed. Your feedback has been instrumental in this process, and we sincerely extend our gratitude for your invaluable insights. Issues that we haven't managed to handle adequately will be designated as future work, underscoring our commitment to continued refinement. We will ensure that your comments are incorporated into the revised manuscript and appreciate the guidance you've provided for our future research endeavors.
Rebuttal 1: Rebuttal: Dear Reviewers, We wish to express our sincere gratitude for your diligent review and insightful feedback on our manuscript. Your comments have greatly enriched our understanding and enabled us to identify areas where further clarification and improvement were needed. We have uploaded this summary table in a one-page PDF format in the review system. Thank you once again for your time and contributions. We look forward to any further comments or suggestions you may have, and we remain at your disposal for any additional information or clarification. With kind regards. Pdf: /pdf/c441c1282fd7e1462819e15c10874376c120f4d9.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
A Combinatorial Algorithm for Approximating the Optimal Transport in the Parallel and MPC Settings
Accept (poster)
Summary: This paper gives the first parallel, combinatorial algorithm to approximate the optimal transport distance. Moreover, the algorithm nearly matches the run-time of the best known parallel algorithms for the problem, with expected parallel runtime of O(log(n)/eps^2) (for eps*n the additive error of the OT cost). The motivation for this work comes from the fact that the previous best theoretical guarantee for this problem—an O(log n/eps) runtime parallel algorithm by Jambulapti et al (Neurips 2019)—is not really implementable. Thus practitioners normally use the much easier to implement Sinkhorn algorithm, which has parallel complexity \tilde{O}(log n/eps^O(1)). Additionally, while there is another combinatorial for the sequential problem, it was not able to be parallelized. Further, in the MPC setting, the authors show that one can execute their algorithm in the MPC setting with O(log log (n/eps)/ eps^2) rounds with O(n/eps) memory per machine. Lastly the authors implement their algorithm to exploit GPU parallelism and show that in nearly all settings reported, they beat the Sinkhorn algorithm. Strengths: - I find the motivation for this paper to be very interesting. While it is in theory important that we push worst-case guarantees (such as the Jambulapti et al work), it is important to design algorithms that are implementable and still have good results. This combinatorial, parallelizeable algorithm certainly does this. - As someone more combinatorially minded, I liked the algorithm. It’s basically a push-relabel type algorithm. - Empirically, the proposed algorithm does well enough compared to the Sinkhorn algorithm that it might even be that the analysis in the paper is a bit lossy. Empirical evidence was convincing enough for me. - Well Written Weaknesses: - In terms of raw numbers, there’s no big theoretical improvement. (But again, that’s not the main point of the paper.) - The presentation in the paper could be better at times. To be clear, the writing is very very good, but the algorithm is so simple in a nice way, that I think they could’ve made points clearer. I will elaborate in comments below. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Comments to the authors: - Remove sentence in line 21-23, you already say this at a more relevant part later - R_{>0} -> \mathb{R}_{>0} in line 30 - Line 72 “achieve a improved” -> “achieve an improved” - Is the Sinkhorn runtime tight around O(log n/eps^O(1))? Or empirically is it better than that guarantee? Also can you explicitly state whatever constant the O(1) term is hiding at least once? - Line 179 “which will has a ” -> “which has a ” - This algorithm is so reminiscent of push-relabel that I kept looking for more explicit direct comparisons to push-relabel in section 2. In fact step 2 of the algorithm is just an augmenting path and you keep an eps-feasible dual, similar to the heights in the push-relabel algorithm. Then even your proof of correctness is the same as push-relabel. Can you say more about the inspiration from push-relabel? - You should add a figure for the algorithm, maybe 3 sub-figures, one for each of the 3 steps - Line 308 “we have describe our” -> “we have described our” - I think the lines you refer to in algorithm 1 are all messed up. Double check me on this but I think where you say “Lines 5-9” should be “Lines 5-7”, where you say “Lines 10-13” should be “Lines 8-10”, and where you say “Lines 14-20” should be “Lines 11-15”. - Lime 694 in appendix needs to be rewritten Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: na Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *Is the Sinkhorn runtime tight around $O(\log n/\varepsilon^{O(1)})$? ... Also can you explicitly state whatever constant the $O(1)$ term is hiding at least once?* Over time, the analysis of the Sinkhorn algorithm has seen multiple improved bounds to its $\varepsilon$ dependency. To our knowledge, the best bound is given by Dvurechensky et al. (see below for citation), which gives a sequential $\tilde{O}(n^2/\varepsilon^2)$ bound, and is easily parallelizable to $\tilde{O}(1/\varepsilon^2)$, basically matching our algorithm's provable bounds. (We use $\tilde{O}$ to hide logarithms here). We will edit the paper to clarify this. *Or empirically is it (Sinkhorn) better than that guarantee?* This seems to be the case, based on our results, and prior work with Sinkhorn that we have seen. Our algorithm seems to also outperform its own theoretical expectations. *This algorithm is so reminiscent of push-relabel that ...* Thank you for pointing this out. Indeed, our algorithm is using the popular push-relabel framework and our proof of correctness is also based on the standard feasibility conditions for matchings. Our main technical novelty is in the efficiency analysis of our algorithm. Using the fact that edge costs are small integer multiples of $\varepsilon$, we bound the number of push-relabel iterations by $1/\varepsilon^2$. This allows us to achieve a parallel complexity of $O(\log n/\varepsilon^2)$. We will include this in the discussion of "Our Approach" (L131-141). Dvurechensky, P., Gasnikov, A., Kroshnin, A. (2018, July). Computational optimal transport: Complexity by accelerated gradient descent is better than by Sinkhorn’s algorithm. In International conference on machine learning (pp. 1367-1376). PMLR. --- Rebuttal Comment 1.1: Comment: Thank you for the response. My score remains the same for now, but I will continue to monitor the discussion during this response period.
Summary: This paper presents a novel combinatorial algorithm that finds an $\epsilon$-optimal transport plan for the optimal transport problem. Additionally, it introduces a variant of this algorithm in the MPC model, exhibiting an expected $O(\log\log n)$ communication rounds, with each machine having a memory of $O(n)$ order. This algorithm offers speedups over previous approaches, which required at least $\Omega(\log n)$ rounds, in the large-$n$ scenarios. Furthermore, the authors provided a GPU-friendly interpretation of their algorithm utilizing matrix operations, and supported their results using numerical experiments. Strengths: The primary contribution of this paper is a combinatorial algorithm for approximate optimal transport, which exhibits efficient parallelization and significantly outperforms previous algorithms in the Massive Parallel Computation (MPC) model with respect to $n$, the total support size of the input distributions. The algorithm is simple and elegant, allowing for a GPU-friendly interpretation based on matrix operations, thereby enhancing its competitiveness in real-world scenarios. The analysis of the algorithm is relatively straightforward and easy to follow, while the conducted numerical experiments provide compelling evidence for its effectiveness. Weaknesses: 1. This paper only improves upon previous works in the MPC model rather than the broader parallel settings. In the context of the parallel time, both this work and previous algorithms based on Sinkhorn-Knopp methods are of order $O(\log n)$. Consequently, this limitation might restrict the applicability of this algorithm in real-world scenarios with different parallelization settings. 2. Compared to previous works, this algorithm has a larger dependence on $\epsilon$. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Correspondingly, I have two questions that, if addressed, could make the results even stronger: 1. Using similar combinatoric techniques, is it possible to obtain an algorithm that has better performance than previous works in parallel time? 2. Can further improvements be made to reduce the dependence on $\epsilon$, or are there practical reasons why the focus remains primarily on the dependence on $n$? Minor comments: 1. It would be good to have a discussion on open questions. 2. Page 2, line 40: "one" -> "when" 3. Please check the format of reference [16,22,23,24]. In particular, the title of [16] and the author lists of [22-24] are not presented correctly. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Not relevant in my opinion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *Using similar combinatoric techniques, is it possible to obtain an algorithm that has better performance than previous works in parallel time?* We believe the improving our parallel OT epsilon dependency from $O(1/\varepsilon^2)$ to $O(1/\varepsilon)$ is plausible, but difficult – we have spent considerable time trying to do so. Designing such an algorithm remains an interesting open question. *Can further improvements be made to reduce the dependence on $\varepsilon$, or are there practical reasons why the focus remains primarily on the dependence on $n$?* Typically, in many applications, one may be happy with obtaining OT-cost with a precision of two decimal points, i.e., $\varepsilon = 0.01$. In contrast, the input size may vary by large numbers. For instance, an image with $n$ pixels can be treated as a distribution with $n$ points in its support. The number of pixels in different images may vary by a lot. Therefore, typically, it is desirable to have algorithms with better dependence on the input size as opposed to a better dependence on the error parameter. *It would be good to have a discussion on open questions.* Yes, we can add one using the extra page of space in the final version. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the detailed rebuttal. I remain my rating.
Summary: The paper presents a parallel combinatorial algorithm for the optimal transport (OT) and minimum weight perfect matching. The algorithm computes an additive epsilon approximation. The algorithm runs in O(n^2 / epsilon^O(1)) time and (in a straightforward way) gives an algorithm running in O(log log n / epsilon^O(1)) MPC rounds, using linear space per machine. The paper also compares a GPU implementation of the algorithm with the Sinkhorn method showing improved running time. Strengths: * The paper gives a simple and clean algorithm for a practically relevant problem. * The algorithm delivers good speedups over the baseline Weaknesses: The main weakness of the paper is a very limited empirical evaluation. The 1.5-page long section on empirical evaluation contains mostly the dataset descriptions (which in my opinion could easily go to the supplementary material if space is a concern) + a short running time comparison to a single baseline. I'd like to see a more in-depth comparison, for example (I'm trying to suggest ideas here, not define a list of requirements): * a comparison between CPU implementations * analysis of the number of rounds used * evaluation on non-dense graphs * quality analysis: what is the actual absolute / relative error as a function of epsilon Minor suggestions: * I don't think describing the Israeli-Itai algorithm is a good use of the space (I'd suggest removing it), given that the algorithm ends up using a different idea (which I consider folklore) Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. What is the work and depth (aka span) of the algorithm described in 4.2? (in the shared-memory work-depth model) 1. What is the reason for not including DROT results in section 5? How does DROT compare to Sinkhorn? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *The paper presents a parallel combinatorial algorithm for the optimal transport (OT) and maximum weight matching.* We believe that the reviewer has misunderstood the problem being solved in this paper. We present an algorithm for the *minimum* weight matching (i.e., the assignment problem) which is a significantly harder to approximate than the *maximum* weight matching. For instance, the greedy algorithm is known to provide a $1/2$ approximation for the maximum weight matching problem. However, for minimum weight matching, the greedy approach has an unbounded approximation ratio and, even for points on a line, the approximation ratio improves only to $\Omega(n^{0.59})$ (see citation below). Given the reviewer's misunderstanding of the problem, we believe they might have given an inaccurate evaluation of the paper. *It is possible to black-box reduce maximal matching to $(1+\varepsilon)$ approximate matchings in MPC (see Corollary 1.3 of ...* Again, this result is for approximating the maximum weighted matching, which, as we discussed previously is a different problem than minimum-weight matching. *I'd like to see a more in-depth comparison, for example ... CPU implementations.* We have already provided a CPU comparison results for our sequential algorithm; see appendix section C.3. *Analysis of the number of rounds used ...* This is a good suggestion, one that we should have included, because it gives an implementation-independent gauge of algorithm convergence. We reran all of our experiments. For OT, we found that our algorithm consistently executes something like 10x fewer iterations than Sinkhorn. For assignment problem, it is more mixed. This corresponds to what we observe in our actual running times, and demonstrates that our speedup is intrinsic, rather than simply a matter of implementation. For the final version of the paper, we will add iteration counts for all experiments. *Evaluation on non-dense graphs ...* The OT and assignment problems have been formulated on complete graphs. So, testing the algorithm on sparse graphs is not applicable for the problem. *Quality analysis ...* In each of our experiments, we already do compare the actual errors produced by the algorithms involved, not just the input values of the error parameters. In the Sinkhorn comparisons, we select the error parameter input value so that the actual error produced by Sinkhorn is within 1E-6 of our algorithm’s actual error. Thus, the current comparison is fair in terms of the actual produced solution quality. However, we understand that including concrete values for the actual errors produced in comparison to the optimal may still be valuable information, and we will update the figures to include this information. *What is the work and depth (aka span) of the algorithm described in 4.2?* As mentioned in Section 4.1 L262--266, plugging in the parallel algorithm by Israeli-Itai to our algorithm from Section 3 leads to a parallel implementation with a depth of $O(\log n/\varepsilon^2)$. The work is dependent on the total work done by all invocations of the Itai Israeli algorithm. While they do not specifically give a bound on work, they do assume at most $|E|$ processors are used, so the work is $|E| \log |E|$, where $E$ is the set of edges in the admissible graph. Using equation (4), the total work can therefore be bounded by $O(n^2 \log(n) / \varepsilon)$. The modifications presented in Section 4.2 are motivated by a need for practical implementation on the GPU. In particular, we adapt and simplify the Israeli-Itai's algorithm for general graphs to the bipartite setting. Although, we haven't formally analyzed the span of this practical implementation, we believe it will be the same as that of the standard implementation. After a thorough check, we will include a discussion on the span of this implementation in the next version of the paper. *What is the reason for not including DROT results in section 5? How does DROT compare to Sinkhorn?* We have included a comparison of DROT with our algorithm in appendix section C.2 and our algorithm generally outperforms DROT. The implementation of DROT is in CUDA whereas our algorithm as well as Sinkhorn are implemented in PyTorch. CUDA implementations are faster than those on PyTorch, and despite this, our algorithm generally outperformed DROT. Nonetheless, due to a difference in implementation platform, we decided not include these results in Section 5 but to move it to the appendix. For the same reason, we decided not to compare DROT against Sinkhorn. Citation : Reingold, E. M., Tarjan, R. E. (1981). On a greedy heuristic for complete matching. SIAM Journal on Computing, 10(4), 676-681. --- Rebuttal Comment 1.1: Title: Thank you for the feedback Comment: Thank you for the response. I'm sorry for confusing the maximum with minimum weight variants, thank you for pointing this out. I think the paper should be accepted and will update the review accordingly.
Summary: The paper considers the Optimal Transport problem, which is a metric for the distance between distributions, used by some ML/DL methods. It presents a combinatorial algorithm for that problem, which can be parallelized and implemented efficiently both in map-reduce systems (MPC) and on a GPU. It also addresses a special case of that problem, the assignment problem. The paper proves that the worst-case run-time complexity of the presented algorithm is O(n^2/ \epsilon^2) (for assignment O(n^2/ \epsilon)) in sequential time. The parallel time is O(\log n / \epsilon ^2). This improves upon the Sinkhorn-Knopp method, which is the state-pf-the-art method for this problem used in the Python optimal transport package. Although this run-time does not improve upon the theoretic results of Jambulapati et al., the authors explain that their algorithm is difficult to implement and not used. For MPC the paper shows that O(\loglog n / \epsilon ^2) rounds are required with O(n) memory per machine, the first sub-logarithmic combinatorial algorithm for that problem. Finally, the paper presents experimental results on a GPU, showing significant run-time improvements (sometimes above 10x) over the Sinkhorn-Knopp method for six datasets in most of the relevant ranges of epsilon (two for assignment, four for optimal transport). Strengths: The paper considers an interesting problem and clearly presents a new algorithm for solving it, showing both theoretic and experimental results. The new results improve upon the previous methods published in NeurIPS several years ago. The algorithm creatively combines and modifies existing ideas. The presented results are comprehensive, addressing different systems on which the algorithm may run. The special case of the assignment problem is interesting on its own. Weaknesses: The paper has several weaknesses: - The code used for the experiments was not provided, so the results cannot be reproduced and might not benefit the community. - The algorithm of Jambulapati, which is the best asymptotically based on theoretical analysis (for parallel time), was not a part of the experimental comparison. - The experimental results use "toy examples" of datasets with some ranges of epsilon. It is unclear if the presented algorithm is the best choice for practical scenarios, such as those in the cited papers that use optimal transport (or whether linear programming or the exact algorithm could be the best choice there). - Although this paper improves upon papers published in NeurIPS several years ago, it may have a bigger relevant audience in conferences focused on combinatorial algorithms. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Questions to the authors: - Can you share the code, to enable reproducibility and benefit the community? - Can you present an experimental comparison with the algorithm of Jambulapati et al.? - Can you present an experimental comparison in one of the real scenarios in which optimal transport was used in ML/DL? (e.g., from one of the papers cited) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, I don't see an issue regarding this. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *The code used for the experiments was not provided...* Actually, our code was provided as part of the supplement. You can find it in the provided zip file, alongside the appendix PDF. The code also includes a detailed README that explains how to run the experiments, specifically meant to facilitate reproducibility. *The algorithm of Jambulapati ... was not a part of the experimental comparison.* Although the work by Jambulapati et al. gives the asymptotically fastest bound in theory, there is good reason to believe that the overhead of the algorithm causes it to be less efficient than Sinkhorn in practice. Lin et al. [21] agree with this assessment, stating: "Despite the theoretically sound complexity bound, the lack of simplicity and ease-of implementation make this algorithm less competitive with Sinkhorn and Greenkhorn which remain the baseline solution methods in practice.'' Even the Jambulapati et al. paper itself does not claim to outperform Sinkhorn. Overall, since Sinkhorn remains a better baseline in practice, we did not see a need to compare to the Jambulapati et al. Furthermore, we are not aware of any publicly available GPU implementation of their algorithm. *Can you present an experimental comparison in one of the real scenarios in which optimal transport was used in ML/DL?* This was the intent of our current NLP experiments, which are similar in principle to the so-called `Word Movers Distance', a very well known method of comparing documents. See the paper ``From Word Embeddings To Document Distances'', which appeared in the 32nd International Conference on Machine Learning (ICML) and has over 2000 citations. --- Rebuttal Comment 1.1: Title: Thank you Comment: Thank you for your response, which indeed addresses my questions and comments. In view of concerns raised by other reviewers I currently keep my rating, but I will follow the discussion and consider raising it.
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their helpful suggestions. The reviewers pointed out some minor grammatical and presentation changes; we appreciate these suggestions and will address them in the final paper version. For questions and comments that warrant a more significant discussion, we have addressed our responses directly to the relevant reviewer. When addressing such questions or comments, we will quote directly from the reviewer, using italics, in order to concisely reference the relevant portion of the review.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Neuro-symbolic Learning Yielding Logical Constraints
Accept (poster)
Summary: This paper takes inspiration from convex and bilevel optimization to derive a learning algorithm that simultaneously learns neural network perception and rules on the perceptions. The authors derive an efficient training algorithm involving multiple minimization steps. Post rebuttal: I thank the authors for addressing the concerns. I will keep my score, this is a good paper, and I am happy error bars are included in the rebuttal. Strengths: The paper tackles the genuinely very hard problem of learning rules and perception simultaneously. The performance on, from what I can tell, quite challenging tasks is great. Ablations show multiple complex techniques help with performance. Furthermore, the paper has several proofs on convergence and optimality. The method itself has many (original) interesting ideas, although there are many. Weaknesses: While the paper is decently written, there are a lot of technicalities to the method that makes understanding the high-level picture challenging. It relies on many optimization methods: Cardinality constraints, difference of convex programming, proximal point algorithms and trust regions, not to mention quite a few hyperparameters. Some equations fly in without a clear justification, and glancing at the appendix does not help me either. Furthermore, the paper introduces a hyperparameter alpha that significantly complicates the computation without actually using it. The main goal of this paper is to yield logical constraints, but whether they are also interpretable is not evaluated. The paper does not have error bars on experiments. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - An SMT solver is mentioned multiple times of which in Figure 1. Is my understanding correct that it is only used during inference, not training? - Would the method also work by taking the cross entropy for the $\ell_1$ loss? - Page 4, 131: Why is it allowed to rewrite the objective introducing alpha like that? - Proposition 1: Explain what $\mathbf{e}$ is. - Page 5, 154: The linearization uses $(e-2u)^Tu$, but further down (line 172) it is $(e-w)^Tw$. Where did the 2 go to? - Same question for neural network training. Also, I think there's something wrong with the braces there. - To what values do you set $m$ (number of rules)? - Appendix, table 1: You mention that $\alpha$ is fixed to 1, meaning the influence from the rules is ignored by symbol grounding. Why did you choose this setting? $\alpha$ significantly complicates your mathematics. Setting it to 1 would simplify the equations significantly. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: Inference of this method is very expensive, as highlighted in the limitations section. I'd have preferred this in the main paper, not in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **Response to Reviewer MqQr** Thanks for the comments. **The role of SMT solvers in our framework:** Yes, SMT solvers are only used during inference. **Replace $\ell_1$ loss by cross entropy:** Yes, we can directly change equation (6) to the gradient descent of cross entropy loss, and it should also work. This is not only implied by theory, but also observed by our preliminary experiments. **The derivation of the equation in Line 131:** The original constraint in network training is $\bar{\mathbf{z}} = \arg\min\nolimits_{\mathbf{z} \in \mathcal{Z}}~ \ell_2(h_{\boldsymbol{\boldsymbol{\phi}}}(\mathbf{z},\mathbf{y}), 1) := \\|\boldsymbol{W} (\mathbf{z}; \mathbf{y}) - \boldsymbol{b}\\|^2$. However, this constraint is intractable due to the ill-conditioned $\boldsymbol{W}$. Hence, we introduce the network prediction as a guidance, forming the final equation in Line 131. **Explain $\boldsymbol{e}$:** $\boldsymbol{e}$ is an all-one vector, and we will clarify this in the revision. **The derivation in Line 154 and neural network training equation:** Thanks for pointing them out. They are typos (we also confirm that our submitted code is correct). We will carefully checked the paper again and correct them in the revision. **How to set $m$ (number of rules):** In our experiment, we directly set a large enough $m$ to learn all possible logical constraints. For example, we set $m = 2000$ in the visual SudoKu solving task, where only 324 constraints are required. Our method returns 324 unique constraints and the rest are redundant ones. We will include it in the revision. **Hyperparameter $\alpha$:** Sorry for the confusion due to the inconsistency of $\alpha$ in the main text and the Appendix. In the main text, we use the following formulation for symbol grounding problem: \begin{equation} \bar{\mathbf{z}} = \arg\min\nolimits_{\mathbf{z} \in \mathcal{Z}}~ (1-\alpha) \\|\boldsymbol{W} (\mathbf{z}; \mathbf{y}) - \boldsymbol{b}\\|^2 + \alpha \\|\mathbf{z} - f_{\boldsymbol{\theta}}(\mathbf{x})\\|^2. \end{equation} In appendix, we instead use \begin{equation} \bar{\mathbf{z}} = \arg\min\nolimits_{\mathbf{z} \in \mathcal{Z}}~ \\|\boldsymbol{W} (\mathbf{z}; \mathbf{y}) - \boldsymbol{b}\\|^2 + \alpha \\|\mathbf{z} - f_{\boldsymbol{\theta}}(\mathbf{x})\\|^2. \end{equation} $\alpha = 1$ is set for the latter, which is equivalent to $\alpha = 0.5$ for the former (where the network prediction and constraints satisfaction are considered equally). We will correct them in the revision. **The interpretable logical constraints:** For visual SudoKu solving task, a logical constraint is in the form of "x_111 + x_121 + x_131 + x_141 + x_151 + x_161 + x_171 + x_181 + x_191 == 1", meaning that there should be a "1" in the first row (where x_123 means the entry of 1st-row, 2st-column is 3). We will add some learned logical constraints for both tasks in Appendix G4. **Error bars on experiments:** We briefly provide some results of total board accuracy as reference, and will include the full version in the revision. | | SATNet dataset | RRN dataset | SATNet $\to$ RRN | RRN $\to$ SATNet | | ------- | ----------------: | ----------------: | ----------------: | ----------------: | | SATNet* | 67.3 ($\pm$ 2.36) | 0.1 ($\pm$ 0.10) | 1.4 ($\pm$ 0.14) | 0.0 ($\pm$ 0.0) | | L1R32H4 | 90.5 ($\pm$ 2.85) | 65.7 ($\pm$ 6.97) | 21.3 ($\pm$ 3.48) | 94.5 ($\pm$ 1.65) | | Ours | 95.6 ($\pm$ 0.31) | 92.8 ($\pm$ 0.73) | 93.9 ($\pm$ 0.40) | 95.2 ($\pm$ 0.91) | **Limitation section:** In the revision, we will summarize the limatations into the main body of the paper as suggested. --- Rebuttal Comment 1.1: Comment: Thanks for addressing the concerns. I will keep my score, this is a good paper and I am happy error bars are included in the rebuttal.
Summary: This work provides a neural framework that combines network training, symbol grounding, and logical constraint synthesis. It utilizes the cardinality constraints to express the logical constraint learning and a DC penalty for constraint relaxation. The evaluation demonstrates that this method outperforms state-of-the-art models including both SATNET and L1R32H4 by a large margin. Strengths: Originality: 4/5 Pros: This methodology introduces two novel important components, DC relaxation loss, and cardinality constraints, into the numerical learning framework. Cons: One interesting component in this paper, mapping the learned numerical rules into the symbolic space is not quite clearly explained. By studying the related work section, I think the "Softened Symbol Grounding for Neuro-symbolic Systems"[1] seems quite related to the rule extraction part. I feel this work is probably not emphasized enough in the paper's related work section. Quality: 4/5 Pro: The theory and properties come with all the proofs explained in the appendix, which is quite convincing. This work also reaches better than the SOTA performance for the two experiments. Cons: The SATNET* uses Distilled LeNet as its underlying perceptual model, while you have used a recurrent transformer model. This seems an unfair comparison. Clarity: 3/5 Pros: The math component is nicely defined and with well-explained definitions. Cons: It is hard to understand the process of extracting a constraint from its numerical form ($\mathbf{w}$ and $\mathbf{b}$). Significance: 4/5 Pros: This work improves from the existing work by learning constraints along with ground truth perception, which is an important task in neural symbolic learning. Cons: The learned constraints are in boolean form and thus not high level enough to generalize across different task variants. [1] Li, Zenan, et al. "Softened Symbol Grounding for Neuro-symbolic Systems." The Eleventh International Conference on Learning Representations. 2022. Weaknesses: See Strength. Minor comments: 1. Line 91: There are two papers coming from different groups, whose first author's names are both Li. It seems that the two works come from the same person in the article. 2. I would recommend putting the limitation section into the main paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What is the limitation of using cardinality constraint? Is there a fundamental limitation other than the tricky usage where introducing auxiliary variables is required? 2. How are the 324 cardinality constraints calculated? 3. Suppose the number of the constraint is unknown for a new task, what should a user do? How will that impact the performance? 4. How to interpret a boolean-based constraint to an average programmer? I am happy to raise my score if these questions are addressed. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Suggestions included in the Question section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **Response to Reviewer MGh8** Thanks for the comments. Following are our responeses. **Limitation of cardinality constraint:** Essentially, cardinality constraints can represent any propositional logic formula, i.e., they have the same expressiveness with CNF (Conjunctive Normal Form) or DNF (Disjunctive Normal Form) of propositional logic. However, when learning cardinality constraint, some biases do exist (e.g., bounding box $[b_{min}, b_{max}]$ should be tight) which may result in the incompleteness of the learned constraints. **The calculation of 324 cardinality constraints:** A logical constraint "x_111 + x_121 + x_131 + x_141 + x_151 + x_161 + x_171 + x_181 + x_191 == 1" means that there should be a "1" in the first row (where x_123 means the entry of 1st-row, 2st-column is 3). We have 9 such constraints for the first row, and we have 9 rows, resulting a total of $9\times 9=81$ constraints for rows. We also have 81 constraitns for columns and blocks, respectively. There are also $9 \times 9=81$ constraints requiring that each cell should be in $1-9$. Therefore, the total number of constraints is $81 \times 4=324$. For our method, we initially set a large number, and then remove the redundant constraints, resulting exactly 324 contraints confirmed by manual checking. **How to determine the number of constraints:** One can directly set a sufficiently large $m$ (number of constraints), and we observe that a large $m$ does not ruin the effectiveness of our method. For example, we set $m = 2000$ in the visual SudoKu solving task, where only 324 constraints are required. Our method returns 324 unique constraints and the rest are redundant. A more efficient way is initially setting a large $m$ to estimate the actual number of logical constraints needed, and then adjusting it for more efficient training. We will include a brief discussion on the setting of $m$ in the revision (Appendix F). **Interpret a boolean-based constraint to an average programmer:** A reasonable method is to translate the Boolean-based constraints into a CNF formula, and we think it is more friendly for an average programmer to understand such a CNF formula. The tranlsation is studied for a long time and can be automatically conducted by off-the-shelf SMT solvers [1, 2]. **Relation to the paper “Softened symbol grounding…”:** We will highlight the relation in the revision as suggested. **Compare with SATNet using LeNet architecture:** We agree that it could be unfair to compare with SATNet* considering different architectures. As an ablation study, we use LeNet-5 model in our method, deriving the results of total board accuracy as follows. | | SATNet dataset | RRN dataset | SATNet $\to$ RRN | RRN $\to$ SATNet | | ---------------- | ---------------: | ----------: | ---------------: | ---------------: | | SATNet* | 67.3 | 0.1 | 1.4 | 0.0 | | Ours (LeNet-5) | 75.2 | 79.6 | 82.4 | 70.6 | We also observe that our method successfully learned full logical constraints, but the accuracy of perception module degrades from 99.8% to about 99.1%, leading to the drop of overall accuracy. **Misleading citation in Line 91:** We will correct it in the revision. **Limitation section:** In the revision, we will summarize the limatations into the main body of the paper. [1] Sinz, C. (2005, October). Towards an optimal CNF encoding of boolean cardinality constraints. In International conference on principles and practice of constraint programming (pp. 827-831). Berlin, Heidelberg: Springer Berlin Heidelberg. [2] Eén, N., & Sörensson, N. (2006). Translating pseudo-boolean constraints into SAT. Journal on Satisfiability, Boolean Modeling and Computation, 2(1-4), 1-26. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns.
Summary: The authors propose a fusion of neural network and symbolic domain via logical constraints to learn specific vision tasks in a weakly supervised way. They break it down to two optimization problems for neural and symbolic domains. The logical constraints are solved using a deterministic solver and grounded softly to the neural component via a latent space $z$. Strengths: * The neuro-symbolic approach of solving tasks that require some planning is always an interesting direction. * The results on the two domains studied in the paper are quite strong, indicating that such fusion can indeed converge well Weaknesses: * The paper needs more high level figures to help understand the exact logical constraints being enforced for both the domains. This will significantly improve the readability. * Please add some discussion about the limitations of using the specific cardinality-based logical constraints you chose in terms of where it might be applicable vs where it cannot. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Why do you chose these specific tasks? Is it following some prior works? Please make it clear. This will help understand the scope of the work in terms of applicability to different domains. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: * The authors don't have any limitations section. Please refer to weakness for recommendations on this. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **Response to Reviewer w2hx** Thanks for the comments. **The motivation of tasks:** The visual SudoKu solving task is a standard and commonly-evaluated task from existing neuro-symbolic learning methods. The self-driving planning task is proposed by ourselves. Specifically, we aim to introduce this more practical task for the neuro-symbolic learning community. We will further clarify this in Appendix G3 and G4. **Limitation:** We discussed the limitations in Appendix A. In the revision, we will summarize the limitations into the main body of the paper. --- Rebuttal Comment 1.1: Title: Response to authors Comment: Thanks for clarifying my concerns.
Summary: The paper proposes a new neurosymbolic approach for learning symbolic representations and logical constraints on top of these simultaneously. The authors propose a new penalty term based on Difference of Convex (DC) programming in order to relax the optimzation problem. The performance of the proposed approach is demonstrated on the visual sudoku challenge and a path planning environment based on the Kitti and NuScene datasets. Overall, the method performs better than existing works and achieves results close to a fully supervised architecture in some cases. Strengths: The paper takes a principled approach to the problem of learning symbolic representations and logic rules simultaneously. The proposed DC programming approach allows for the theoretical analysis as per Theorem 1 demonstrating that gradual increase of the weight of the DC penalty term results in the desired stationary point. Also, the reported results demonstrate strong improvement over other neurosymbolic approaches in the two considered environments of the visual sudoku and path planning. Weaknesses: The performed experiments focus on 2 relatively simple and static environments. Moreover, one of the core motivations behind neurosymbolic approaches is the ability to learn faster and/or transfer knowledge to new environments, however none of these questions have been addressed by the conducted experiments. Overall, I think the paper would benefit from making the methodology section more concise and to the point and expanding more on the results. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - What does the notation (z;y) mean exactly in w^T(z;y)? Is it some form of concatenation? - You mention that the solvers can find a solution even with incorrect perception. Why do you think this happens? Do you think that using more powerful solvers can actually be a problem for the perception side of things? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The paper does not discuss the limitations of the proposed methodology which is important in order to identify meaningful directions for future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **Response to Reviewer X5tc** Thanks for the comments. **The challenge of two tasks:** The visual SudoKu solving task is a standard and commonly-evaluated task in existing neuro-symbolic learning methods. Particularly, only 17 out of 81 cells are initially filled in some of the SudoKu boards of the RRN dataset, and the difficulty of solving such a SudoKu can be well understood in the Appendix of [1]. Furthermore, we try to explore some real applications of neuro-symbolic learning via introducing the self-driving planning task. In this task, we extract two main modules (i.e., object localization and path planning) in self-driving systems, which is suitable for evaluating neuro-symbolic learning method (object localization is the neural part and path planning is the symbolic part). We would also like to point out that, to the best of our knowledge, the genuine end-to-end learning of  neuro-symbolic systems has not been achieved before, even for simple (exemplar) tasks such as those used in the evaluation. This paper shows that it can be made feasible with a natural and disciplined approach, including a game theory framework connecting neural and symbolic learning, as well as more techical approaches such as trust-region penalty (to prevent degeneracy), and DC relaxation (to preserve logic exactness). **Knowledge transfer to new environments:** We agree with the reviewer on the core motivations behind neuro-symbolic approaches. Actually, for knowledge transfer, we have conducted an experiment on transferring the learned rules from the SATNet dataset to the RRN dataset, and vice versa. Note that these two datasets vary significantly in terms the difficulty levels of Sudoku solving tasks (RRN dataset is more challenging; see Section 4.1). Our experimental results (the third-column in Table 1 and Table 6 in Appendix G.3) show that, when applying both model and learned rules from SATNet (RRN) data to RRN (SATNet) data, our method is consistently effective, while the performance of existing methods drops dramatically. **Notation $(\mathbf{z};\mathbf{y})$:** We represent the column concatenation of two vectors $\mathbf{z} \in \mathbb{R}^m$ and $\mathbf{y} \in \mathbb{R}^n$ by $(\mathbf{z}; \mathbf{y}) \in \mathbb{R}^{m+n}$. We will clarify it in the revision. **The MAXSAT solver can find a correct solution despite incorrect perception:** The MAXSAT solver returns the solution achieving the highest number of satisfied constraints. Therefore, even with perception errors causing the full set of logical constraints unsatisfiable, MAXSAT solvers may still output the correct results by satisfying a subset of the logical constraints. **Limitation:** We discussed the limitations in Appendix A. In the revision, we will summarize the limatations into the main body of the paper. [1] Palm, R., Paquet, U., & Winther, O. (2018). Recurrent relational networks. Advances in neural information processing systems, 31.
Rebuttal 1: Rebuttal: ### **General response to reviewers** We would like to thank all the reviewers for their kind and helpful feedback. We start by clarifying that we have indeed discussed the limitations of our approach which is included in Appendix A. We will summarize and include them in the main paper. We thank the reviewers for the suggestions on improving readability. We will give more intuitive explanations of the core ideas and the results in the revised paper, and move some technical details to the Appendix.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Estimating Propensity for Causality-based Recommendation without Exposure Data
Accept (poster)
Summary: This paper presents a review of a novel method proposed for estimating causal effects in situations where no observation of the treatment variable is available. The authors introduce an innovative approach that utilizes interactions to approximate missing exposure or propensity data. The method relies on several key assumptions, including the significance of "popularity" as a strong confounder of both treatment and outcome variables, as well as the proposition that the outcome variable is a product of the unobserved propensity and relevance (which is a function of historical popularity). Additionally, the paper assumes a monotonic relationship between popularity and exposure (Assumption 1) and utilizes a parametric assumption based on the Beta distribution. The paper does a great job reviewing existing literature. And the simulation section provides a thorough comparison with baselines and oracles that have access to the treatment variable. The ablation study of its own method is also very comprehensive which provided intuition on the most important factors of the proposed model. Strengths: The proposed method is very novel by leveraging strong intuitions in the domain to find strong confounders that can reliably impute the exposure variable through a learnable objective. The paper presents a fresh new perspective into the causal inference application to Recommendation systems when privacy limits the access to treatment assignments in production applications. Weaknesses: N/A Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Question 1: PropCare is a solid method when all the assumptions are met: The method assumes `popularity` as a strong confounder of treatment and of Y. I wonder how general the assumptions are when the treatment changes or when applied to other datasets when the treatment does not have a single strong-enough confounder. Question 2: about line 291 (simulation section): Can authors share the distribution of the fitted propensity scores using all the compared methods? Wonder if POP's bad performance is due to a practical violation of the positivity assumption, plus not mitigating it through bounding the fitted value between e.g. [0.02, 0.98]. Since the PropCare method handles the positivity assumption violation in its regularization term, it should be a fair comparison to perform comparable mitigation for the other baselines. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for acknowledging our novelty and literature review. we will give answers to each question asked by this reviewer below. **Q1:** PropCare is a solid method when all the assumptions are met: The method assumes popularity as a strong confounder of treatment and of Y. I wonder how general the assumptions are when the treatment changes or when applied to other datasets when the treatment does not have a single strong-enough confounder. **A1:** Assuming the casual relationship between popularity and treatment is quite general in many previous works. For example, [a,b] explicitly state the confounder role of popularity in the causal graph, [c,d] also empirically investigate how popularity affects the item exposure or predicted ratings. Hence, we believe the assumptions are reasonably general in most of the situations. **Q2:** about line 291 (simulation section): Can authors share the distribution of the fitted propensity scores using all the compared methods? Wonder if POP's bad performance is due to a practical violation of the positivity assumption, plus not mitigating it through bounding the fitted value between e.g. [0.02, 0.98]. Since the PropCare method handles the positivity assumption violation in its regularization term, it should be a fair comparison to perform comparable mitigation for the other baselines. **A2:** We show the distribution of fitted propensity scores by baselines for DH\_original dataset in **Figure 1 of global rebuttal PDF**. From the results it can be seen that the long tailed distribution of POP where most of the values are close to 0 might be a reason for its bad performance, as the violation of positivity assumption may lead to large variance in the outputs of causal recommendation [e,f]. However, to avoid that, our backbone model DLCE [e] has already deployed a **capping threshold** $\chi$ to reduce variance of the prediction. Besides, in data generation steps of [e], the simulated propensity are **clipped** in the range of [$10^{-6}$, $1-10^{-6}$] as a mitigation. In our experiment, we follow the same clipping step after we obtained the estimated propensity of each baseline by default. Hence, *give these mitigation steps, it is a fair comparison*. Additionally, please note that we have also done an ablation study and parameter analysis of the regularization term in **Appendix D.1**, and we found that even without the regularizer (i.e., $\mu$=0), PropCare still outperformed POP, e.g., 0.87 vs. 0.79 in CDCG on DH_original. This implies that the better performance of PropCare is not just due to the regularizer. **References** [a] Zhang, Yang, et al. "Causal intervention for leveraging popularity bias in recommendation." SIGIR 2021. [b] Tianxin Wei, et al. "Model-Agnostic Counterfactual Reasoning for Eliminating Popularity Bias in Recommender System." KDD. 2021 [c] Zhongzhou Liu, et al. "Mitigating Popularity Bias for Users and Items with Fairness-centric Adaptive Recommendation." ACM Trans. Inf. Syst. 41, 3, Article 55. 2023 [d] Ziwei Zhu, et al. "Measuring and Mitigating Item Under-Recommendation Bias in Personalized Ranking Systems." SIGIR. 2020. [e] Masahiro Sato, et al. 2020. ”Unbiased Learning for the Causal Effect of Recommendation”. RecSys. 2020. [f] Teng Xiao and Suhang Wang."Towards Unbiased and Robust Causal Ranking for Recommender Systems". WSDM. 2022. --- Rebuttal Comment 1.1: Comment: I thank the authors for the response and doing additional experimentation to address my concerns. After reading the rebuttal and the comments from other reviewers, I increase my score from Weak Accept to Accept. --- Reply to Comment 1.1.1: Comment: Dear Reviewer 5c9m, We sincerely thank you for your prompt and timely response to our rebuttal.
Summary: This paper proposes a propensity estimation model for causality-based recommendation without accessing the ground-truth propensity score or exposure data. Prior knowledge about item popularity is utilized to estimate the propensity score. A theoretical analysis is provided to understand the proposed model. Strengths: - This paper investigates an interesting problem in causality-based recommendation, propensity score estimation, and proposes a model to get rid of the requirements for ground-truth exposure data. - A theoretical analysis is provided to understand the critical factors of the proposed model. - Ablation study is conducted. Weaknesses: - I have serious concerns about the evaluation framework in this paper. All the recommendation experiments are based on one single DLCE model, and the adopted baselines for comparison are rather weak. I suggest the authors to include more advanced baselines, and more importantly, to compare with a wider range of backbone models. Just to list a few widely acknowledged causal recommendation approaches [1-3] which all do not require exposure data. - The technical contributions of the paper is limited. The proposed pairwise relationship between item popularity and propensity score is similar to the design in [1,2], which also leverage popularity to serve as a *soft* proxy for propensity score. - The adopted three datasets are very small, considering modern recommendation platforms. - In Table 3, the proposed PropCare model does not achieve the best performance with respect to Tau and F1 score, outperformed by both POP and CJBPR. - The case study is hard to follow and not convincing to me. [1] Bonner, Stephen, and Flavian Vasile. "Causal embeddings for recommendation." Proceedings of the 12th ACM conference on recommender systems. 2018. [2] Zheng, Yu, et al. "Disentangling user interest and conformity for recommendation with causal embedding." Proceedings of the Web Conference 2021. 2021. [3] Zhang, Yang, et al. "Causal intervention for leveraging popularity bias in recommendation." Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2021. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. The authors only experiment with one backbone model, namely DLCE. I suggest the authors to compare with more state-of-the-art causal recommendation approaches. 2. What are the main contributions of the proposed model? What are the differences between the proposed pairwise relationship and existing works such as [1] and [2]. 3. I suggest the authors to conduct experiments on large-scale datasets. 4. Please provide more explanations on the case study. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the valuable comments from this reviewer. we will give answers to each question asked by this reviewer below. **Q1:** Suggestion to compare with SOTA causal recommendation approaches. **A1:** Thanks for the suggestion. However, this review contains **factual errors** which we'd like to highlight to help reviewers better understand the goal of our paper and experiment settings. First, we clarify that our paper is **NOT** to propose a new causal recommendation approach. Instead, the proposed PropCare is a **propensity estimation** method for downstream causal recommendation, aiming to *bridge the gap in existing causality-based recommendation systems, where the propensity score and/or exposure data are often unavailable but required for model training or inference.* Second, we clarify that the term "causality-based recommendation systems" used in our paper refers to the models that recommend items with a higher causal effect (line 28-32). However, the last two papers suggested by the reviewer [b,c] are neither for propensity estimation nor for causality-based recommendation. In particular, [b] aims to tackle conformity bias by decomposing the observed ratings into factors of interest and conformity for click estimation. [c] aims to utilize desired popularity bias by adjusting them in inference for click estimation too. Only [a] is a causal recommendation model, which jointly learn two models on data with and without recommendations. Hence, exposure data are still needed for training. The reviewer said it does not require exposure data, which is wrong. **We further report the causal recommendation results of PropCare, using CausE [a] as an alternative backbone in Table 1 of the global rebuttal PDF.** It shows a similar pattern as using DLCE backbone in the main paper, i.e., our PropCare consistently outperforms other baselines even with different causality-based models as the backbone. Third, as a popular causality-based recommendation system, DLCE [d] (RecSys'20) is still considered a SOTA causal recommendation backbone, which is also agreed by Reviewer Vbya. **Q2:** Main contributions and comparisons with [a] and [b]. **A2:** For the first contribution, please refer to the first point in our A1 to this reviewer. Our second main contribution is to *incorporate prior knowledge for robust propensity estimation* by proposing assumption 1 to model the pairwise relationship on popularity and propensity. This contribution is acknowledged by Reviewer xH3J. Moreover, the novelty of our work is also acknowledged by Reviewer 5c9m. As we have explained in the second point in A1, paper [b] is totally unrelated to our work; paper [a] is a typical causality-based recommendation system which requires exposure status to be observed for training. Our work is proposed for the gap in models like [a] where such data is usually unavailable. **Q3:** Suggestion for large-scale datasets. **A3:** Thanks for the suggestion, but other large datasets are not available for evaluation. To evaluate our model we must have ground-truth propensity and causal-effect, which are usually unavailable. Currently, only DH\_original and DH\_personalized [d] and ML [e] provide such ground-truth, which are widely used in evaluation of causality-based recommenders [d,e,f]. Nevertheless, we still conduct efficiency experiments on larger datasets, see A3 to Reviewer KaLM. **Q4:** Explanations on the case study. **A4:** The case study aims to show how the propensity scores from different approaches affect the causal recommendations. In Table 4, we investigate the top-5 recommended items with each baseline and draw the following conclusions: (1) With ground-truth propensity score and exposure, most recommended items have a positive causal effect ($\tau=1$, denoted by \ddag). (2) Comparing the lists between CJBPR and PropCare, they both hit two purchased items ($y=1$, denoted with bolded text), but CJBPR hits only one item with positive causal effect and PropCare hits two. It means although they have equal performance in conventional recommendation, PropCare outperforms CJBPR in causal recommendation. (3) Though item popularity was used to estimate propensity in some previous works [g,h,i], they are not suitable in causal recommendation. **Weakness 3:** About Tau and F1 score in Table 3. **Response**: In Table 3, we use multiple metrics to measure the accuracy of propensity estimation from different aspects. Tau only measures similarity between two rankings by counting the number of concordant and discordant pairs, which is not enough to judge whether it is a good estimation or not. Hence, we also use KLD and F1. Our PropCare performs best in KLD and F1 in most cases. Even in ML, the F1 is only 0.03 lower than the best. Moreover, using propensity and exposure inferenced by PropCare, we can achieve the best on downstream causal recommendation, which shows the advantage of PropCare over other baselines. **References** [a] Bonner, Stephen et al. "Causal embeddings for recommendation." RecSys. 2018. [b] Zheng, Yu, et al. "Disentangling user interest and conformity for recommendation with causal embedding." WWW. 2021. [c] Zhang, Yang, et al. "Causal intervention for leveraging popularity bias in recommendation." SIGIR 2021. [d] Masahiro Sato, et al. 2020. "Unbiased Learning for the Causal Effect of Recommendation". RecSys. 2020. [e] Masahiro Sato, et al. "Causality-aware neighborhood methods for recommender systems". ECIR. 2021. [f] Teng Xiao and Suhang Wang."Towards Unbiased and Robust Causal Ranking for Recommender Systems". WSDM. 2022. [g] Yuta Saito. "Unbiased Pairwise Learning from Implicit Feedback". NeurIPS 2019 Workshop on Causal Machine Learning [h] Yuta Saito, et al. "Unbiased Recommender Learning from Missing-Not-AtRandom Implicit Feedback:. WSDM. 2020. [i] Longqi Yang, et al. "Unbiased offline recommender evaluation for missing-not-at-random implicit feedback". Recsys. 2018.
Summary: This paper proposes a framework for causality-based recommendation system. Different from traditional correlation-based recsys (e.g. collaborative filtering), causality-based recsys makes recommendations based on the causal "uplift". While there are several causal recsys models in the literature, they rely on exposure data and/or propensity scores to be given. In real-world scenarios, exposure data are often unavailable, difficult to obtain or noisy. Hence, in this paper, the authors proposed a propensity estimation framework, in the absence of exposure data which is a more practical setup, to ultimately allow causality-based recommendation. Experimental analysis are conducted with both quantitative results and a case study. Strengths: S1. The problem studied is of significant research and practical value. In particular, the setup without assuming any exposure data is practical and can be easily deployed on most recsys platforms. Overall, the motivation and challenges are well articulated and convincing. S2. The key assumption in 4.2 is well argued and presented. The empirical validation of the assumption convincingly support the assumption. S3. Experiments are comprehensive with evaluations on causal performance as well as the quality of estimated propensity and exposure. Detailed analysis of results reveal deeper insights, such as the factors influencing casual recommendation. Moreover, the case study is interesting and provide intuitive evidence to the benefit of a causal recsys. S4. Overall the paper is well executed with a well motivated and effective solution. Weaknesses: W1. Below the theoretical property in 4.5, the authors mentioned that the proposition guides certain design choices, such as regularization of the global distribution in Eq (7). While this is a theoretical implication of the proposition, is there any empirical evidence? The current ablation study does not seem to test the usefulness of the KL regularizer. W2. Data set differences: it is not clear to me the difference between DH_orig and DH_personalized. It was stated DH_personalized has a simulated "personalization factor". What is this factor? how does it work exactly? Minor comments: line 233: "should be avoid" -> avoided Table 1: the last 4 column names, are not clearly explained. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for acknowledging the contribution of our work and paper presentation. we will give answers to each question asked by this reviewer below. **Q1:** Below the theoretical property in 4.5, the authors mentioned that the proposition guides certain design choices, such as regularization of the global distribution in Eq (7). While this is a theoretical implication of the proposition, is there any empirical evidence? The current ablation study does not seem to test the usefulness of the KL regularizer. **A1:** Sure, we have already provided additional ablation study and parameter test for KL regularizer in **Appendix D.1**, due to page limitation. We found that when $\mu =0$ (no regularization at all), the causal performance is significantly compromised. And the performance reaches the peak when $\mu$ is in the range of [0.2, 0.8], showing the advantage of regularizing the propensity with beta distribution. **Q2:** Data set differences: it is not clear to me the difference between DH\_orig and DH\_personalized. It was stated DH\_personalized has a simulated "personalization factor". What is this factor? how does it work exactly? **A2:** The difference in DH\_original and DH\_personalized is how they simulate the propensity. For DH\_original, the propensity of an item is basically computed as the ratio between how many weeks the item is exposed to the user and how many weeks the user visits the retailer. For DH\_personalized, the items are first ranked by the computed user's interaction probabilities, then the propensity is estimated by $P_{u,i}=\min \left(1, a\left(1 / \operatorname{rank}\right)^b\right),$ where $\operatorname{rank}$ is the item's rank for that user $u$, $a=100$, $b=1$ are hyper-parameters. In the original paper, we refer ``personalization factor'' to the term $a(1 / \operatorname{rank})^b$. More detailed simulation steps can be found in [a]. **Minor comments:** line 233: "should be avoid" - avoided Table 1: the last 4 column names, are not clearly explained. **Response to minor comments:** Thanks for pointing out these. We will revise the typo and add clear explanations for the last 4 columns of Table 1 in the latest manuscript. **reference** [a] Masahiro Sato, et al. 2020. ”Unbiased Learning for the Causal Effect of Recommendation”. RecSys. 2020.
Summary: The authors propose a propensity estimation/learning method based for unbiased recommendations. The method assumes no external data and only uses the user interaction data for learning. The main idea is well explained in Assumption 1 in the paper, which states that for two items with similar click/interaction probability, the more popular item is likely to be recommended to the user over the other. Authors incorporate this assumption in Eq. 6. The propensities learned via the proposed method perform better as compared to the baselines on several benchmark datasets. Strengths: - The paper is very well-written and very easy to follow. The math is also intuitive to understand, overall a good job by the authors in writing. - The proposed method (Eq 6) follows intuitively from the main assumption (Assumption 1). - Experiments are performed on multiple benchmark datasets, and the proposed method outperforms the baselines. - Authors use the state-of-the-art causal recommendation method DLCE. Weaknesses: - The choice of the KL-divergence-based regularization is not super clear. The GNN paper citation (11) does not use beta distribution as a regularizer, but rather as a weight in the GNN aggregation. Also since $Q$ is the empirical distribution of the estimated propensity scores, how is the regularizer used in the training (could be explained via the gradient equation)? - It is not clear how the ground-truth propensities in the ML dataset are used. In the appendix authors very briefly touch upon that (lines 58, 59), but it's not clear how that relates to the ground-truth propensities. MovieLens has explicit ratings, which users self-select, and the bias in the original dataset reflects the user's self-selection bias, not the recommendation bias. Saito et al [1] transform the ratings into binary feedback to model the bias used in the current paper. - The main metrics used for the evaluation (CP, CDCG) are not explained in the main section, but rather in the appendix. Since they are not very well-known metrics, it would be better if the authors include them in the main experimental section. Reference: - [1] Saito, Yuta, et al. "Unbiased recommender learning from missing-not-at-random implicit feedback." Proceedings of the 13th International Conference on Web Search and Data Mining. 2020. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What is the effect of the regularization term in propensity training? An ablation experiment could help, additionally, an experiment with varying $\mu$ values could also give some insight into this. - If authors could elaborate more on the ML dataset setup (ground-truth propensity, binary feedback from ratings, etc)? (see point 2 in the weakness section) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations are correctly addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for acknowledging the strengths of our paper. We will give answers to each question asked by this reviewer below. **Q1:** What is the effect of the regularization term in propensity training? An ablation experiment could help, additionally, an experiment with varying $\mu$ values could also give some insight into this. **A1:** The intuition of regularization term is to control the distribution of learned propensity to prevent them from clustering near extreme values like 0 and 1. In other words, to **avoid the violation of positivity assumption,** as pointed out by Reviewer 5c9m. Although the citation (cited as [11] in the main paper) does not directly use Beta distribution as a regularizer, it still uses it to control the distribution of weights. As that paper finds Beta distribution is good at modelling popularity characteristics, we chose it to model the propensity distribution which is correlated to popularity in our model. Besides, we have conducted experiments to analyze $\mu$ for the regularization term in **Appendix D.1**, due to page limitation. We found that when $\mu =0$ (no regularization at all), the causal performance is significantly compromised. And the performance reaches the peak when $\mu$ is in the range of [0.2, 0.8], showing the advantage of regularizing the propensity with beta distribution. **Q2:** If authors could elaborate more on the ML dataset setup? **A2:** Sure, let us explain it with more details. The authors of [a] also described how they pre-processed ML dataset. Given an user-item pair, first, they predicted the rating $\hat{R}$ and probabilities of observing the rating $\hat{O}$ using matrix factorization methods. Then, they estimate the interaction probability with recommendation ($\mu^{\mathrm{T}}$) and without recommendation ($\mu^{\mathrm{C}}$) using $\mu^{\mathrm{T}}=\sigma\left(\hat{R}-\epsilon\right), \quad \mu^{\mathrm{C}}=\hat{O},$ where $\sigma$ is sigmoid function and $\epsilon$ is hyper-parameter set to 5. Next, the propensity for this user-item pair is estimated by $P=\min \left(1, a\left(1 / \operatorname{rank}\right)^b\right),$ where $\operatorname{rank}$ is the rank of items for the user according to $\mu^{\mathrm{T}}+\mu^{\mathrm{C}}$. $a$ and $b$ are hyper-parameters which are set to 100 and 1 separately. The potential outcome with and without recommendations and exposure status are sampled from their corresponding probabilities. $Y^{\mathrm{T}} \sim \operatorname{Bernoulli}\left(\mu^{\mathrm{T}}\right), \quad Y^{\mathrm{C}} \sim \operatorname{Bernoulli}\left(\mu^{\mathrm{C}}\right), \quad Z \sim \operatorname{Bernoulli}\left(P\right) .$ Finally, the causal effect and interaction are obtained as $\tau=Y^{\mathrm{T}}-Y^{\mathrm{C}}, \quad Y=Z Y^{\mathrm{T}}+\left(1-Z\right) Y^{\mathrm{C}}.$ Note that in our experiment we use the obtained interaction $Y$ as ground-truth for training PropCare and causal effect $\tau$, propensity $P$, exposure status $Z$ as ground-truth for evaluation. **reference** [a] Masahiro Sato, et al. "Causality-aware neighborhood methods for recommender systems." In Advances in Information Retrieval: 43rd European Conference on IR Research, pages 603–618. Springer, 2021. --- Rebuttal Comment 1.1: Title: Response Comment: Apologies for the late reply. Thanks for your clarification and response, this is helpful. --- Reply to Comment 1.1.1: Title: Thanks Comment: Dear Reviewer Vbya, No worries. Thank you for letting us know that we helped.
Rebuttal 1: Rebuttal: We express our sincere gratitude to all the reviewers for their valuable feedback and insightful comments on our paper. We are humbled by the positive reception and are encouraged by the recognition of the efforts we put into this research. We acknowledge the time and expertise each reviewer has invested in evaluating our work. Especially, we want to thank Reviewer KaLM for acknowledging our **presentation, soundness and experiments.** we thank Reviewer Vbya for also acknowledge our **presentation, experiments and intuition.** We thank Reviewer xH3J for praising the **practical value, presentation and experiments.** We thank Reviewer Pdiw for agreeing the **proposed question, theoretical analysis and ablation study.** Finally, we thank Reviewer 5c9m for strongly appreciating the **novelty and literature review** of our work. Please kindly find our responses to individual reviewers in the corresponding rebuttals, and some of the figures in the rebuttals can be found in the attached PDF below. Sincerely, The authors of "Estimating Propensity for Causality-based Recommendation without Exposure Data" Pdf: /pdf/728b90d52a6632d75ea8975e10566ddc00e736ef.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper focuses on causality-based recommendation system, by proposing a PROPCARE method that estimates the propensity score by using its correlation with popularity. The motivation is well stated and related work is well discussed. Through experiments, the proposed method outperforms the baselines. Strengths: 1. The paper is well written and easy to follow. The motivation is clearly stated and the authors did a good survey around related work. 2. The proposed method is sound, with some theoretical analysis. 3. Experiments are conducted carefully with case studies to demonstrate the effectiveness of the proposed method. Weaknesses: 1. In order to model causality, the authors should make clear statement about whether the relationship between popularity and propensity is correlation or causality, which significantly affects the modeling foundation of this work. 2. Technical foundation is limited. From ML perspective, the technical contribution in this work is to combine point-wise and pair-wise modeling by combining the popularity with propensity estimation, which are all well-developed methods. 3. The baselines used in this method are not state-of-the-art, which lacks of confidence to evaluate the contribution of this work. Causality-based recommendation should share the same goal of general recommender systems, which should include the state-of-the-art methods to compare the recommendation accuracy. 4. The datasets used in this work is quite small, it's worthy to check the performance and scalability of the proposed method in large-scale datasets. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How should the author distinguish the relationship between popularity and propensity as correlation or causality? 2. What's the recommendation accuracy comparison between PROPCARE and state-of-the-art recommendation models? 3. How is the model's performance and scalability to large-scale datasets? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for acknowledging our motivation, writing and experiment. we will give answers to each question asked by this reviewer below. **Q1:** Distinguish popularity and propensity as correlation or causality. **A1:** Thanks for the question. To model propensity (the probability of exposure), we first identify that item popularity directly affects the exposure status of the item, which has been also established by previous works [a,b]. Based on that fact, we propose Assumption 1 to estimate propensity using popularity as a proxy. Though the propensity is highly correlated to the popularity, it is hard to say that propensity and popularity have a direct causal relationship. First, according to interaction model stated in Eq.(2), the propensity score is also affected by interaction and relevance probabilities. Second, there might be other confounder such as item stocks that affects both propensity and popularity. Therefore, **the relationship between popularity and propensity is generally correlation, but not always be causality.** In our model (especially Assumption 1), we require the propensity and popularity to have at least correlation relationship. While two variables in a causality relationship are also correlated, it will not affect our modelling foundation whether it is causality or correlation. **Q2:** About the recommendation accuracy comparison between PROPCARE and state-of-the-art recommendation models. **Weakness 3:** About the baselines used in this method are not state-of-the-art. And about the same goals between causal recommendation and conventional recommendation systems. **A2 and response to weakness 3:** We have already compared the causal performance of traditional recommendation models (including **MF**, **BPR** and **LightGCN**) with PropCare (using DLCE as the causal backbone) in **Appendix D.2**, due to page limit. Here we report the comparison of traditional recommendation accuracy with LightGCN, a state-of-the-art conventional recommender below. LightGCN: | | Prec@10 | Prec@100 | DCG | |-----------------|---------|----------|-------| | DH_original | .1029 | .0475 | 2.475 | | DH_personalized | .0737 | .0390 | 2.505 | | ML | .3220 | .2151 | 14.78 | PropCare: | | Prec@10 | Prec@100 | DCG | |-----------------|---------|----------|-------| | DH_original | .1967 | .0764 | 3.097 | | DH_personalized | .2680 | .0939 | 3.615 | | ML | .6168 | .4109 | 18.21 | From the results it can be found that **our model consistently outperforms LightGCN in terms of accuracy metrics.** It is because by definition, items with high causal effect will likely be interacted when they are recommended. In contrast, items with high interaction probabilities do not necessarily have high causal effect. In other word, causality-based recommender has a similar but more focused optimization objective than conventional recommenders like LightGCN. The results also support your comment ``Causality-based recommendation should share the same goal of general recommender systems''. We clarify that as an emerging topic, there are not many approaches for estimating propensity or exposure without additional information. Previously, one popular way is to directly use item popularity (named POP in our experiment as a baseline) to simulate propensity. Besides, we also compare our method with EM and CJBPR, which are both published in recent premier conferences. There are some other works for propensity/exposure estimation, but as we have stated in the related works in the original paper, they assume observable exposure data as training labels [f,g]. For the backbone model, DLCE is also a SOTA open-sourced causal recommendation model. **Q3:** Model's performance and scalability to large-scale datasets. **A3:** It is a great suggestion to test our model on larger dataset. However, there is no other publicly available datasets for evaluation of propensity estimation or downstream causal recommendation. It is because to evaluate them we must have ground-truth propensity and causal-effect, which are usually unavailable due to privacy or technical constraints. Currently, only DH\_original and DH\_personalized provided by [c] and ML provided by [d] provide such ground-truth information. Those three datasets are widely used benchmarks in the evaluation of causality-based recommendation systems, like in [c,d,e]. Nevertheless, to show the scalability of our method, we train our model on 3 popular larger-scale datasets, namely MovieLens-1M, MovieLens-10M and MovieLens-20M. We report the overhead incurred by training PropCare for propensity estimation against DLCE, the backbone causal recommender below. | Training time (hours) | 1M | 10M | 20M | |----------|--------|--------|--------| | PropCare | .0814 | 1.494 | 3.170 | | DLCE | 2.1759 | 22.658 | 40.658 | Note that due to the missing of ground-truth propensity/exposure and causal-effect, we are unable to evaluate the model performance. Yet, **the results show our PropCare is scalable to larger datasets and incurs only a marginal overhead on top of the backbone DLCE.** **References** [a] Zhang, Yang, et al. "Causal intervention for leveraging popularity bias in recommendation." SIGIR. 2021. [b] Tianxin Wei, et al. "Model-Agnostic Counterfactual Reasoning for Eliminating Popularity Bias in Recommender System". KDD. 2021. [c] Masahiro Sato, et al. 2020. "Unbiased Learning for the Causal Effect of Recommendation". RecSys. 2020. [d] Masahiro Sato, et al. "Causality-aware neighborhood methods for recommender systems". ECIR. 2021. [e] Teng Xiao, et al."Towards Unbiased and Robust Causal Ranking for Recommender Systems". WSDM. 2022. [f] Dawen Liang, et al. "Modeling user exposure in recommendation". WWW. 2016. [g] Masahiro Sato, et al. "Uplift-based evaluation and optimization of recommenders". RecSys, 2019 --- Rebuttal Comment 1.1: Comment: The reviewer would like to first thank the detailed responses from authors. Since the authors claim "While two variables in a causality relationship are also correlated, it will not affect our modelling foundation whether it is causality or correlation", the motivation and novelty of this work about causality-based recommendation is a bit misleading, the reviewer would suggest the authors to provide clear definition and empirical evidence about the causality if they would like to insist on causality-based modeling, e.g., treatment effect from purely randomized datasets, etc. Given these, the reviewer still has concerns and would like to stick with current rating. --- Reply to Comment 1.1.1: Title: Further response and clarification Comment: Dear Reviewer KaLM Thanks for your response. It seems there is some misunderstandings on the direction and contribution of our work. 1. As we have emphasized in the main paper and rebuttal to other reviewers, our main contribution is to **estimate propensity score**, which bridges the gap in existing causality-based recommendation systems, where the propensity score and/or exposure data are often unavailable but required for model training or inference. Though the downstream task is causal recommendation, please note that **we do not propose a causal recommendation model.** 2. We would like to clarify that our work of propensity estimation **is not** building upon causality modelling. As explicitly stated in the main paper, the basis of our Assumption 1 is the intuition that, when a user’s interaction probabilities are similar toward two items i and j, but item i is more likely to be exposed to the user, the reason could be item i is more popular than j [a,b]. In accordance with this intuition, **causality between the two variables is not a requirement behind Assumption 1.** This is why we responded that "it will not affect our modelling foundation whether it is causality or correlation." In conclusion, *our research question and solution focus on the propensity estimation, instead of proposing a causal recommendation approach.* **References** [a] Zhang, Yang, et al. "Causal intervention for leveraging popularity bias in recommendation." SIGIR. 2021. [b] Tianxin Wei, et al. "Model-Agnostic Counterfactual Reasoning for Eliminating Popularity Bias in Recommender System". KDD. 2021.
null
null
null
null
null
null
What is the Inductive Bias of Flatness Regularization? A Study of Deep Matrix Factorization Models
Accept (poster)
Summary: The paper studies the minimizers of the loss surface of deep matrix factorization that have a minimal trace of the Hessian. The trace of the Hessian is a measure of flatness of the minimum, that is favored by SGD. The authors show that for matrix sensing with observations that satisfy RIP (which is in particular true for enough Gaussian observations) the parameters that are the flatest have end-to-end matrix that approximately minimize the nuclear norm amongst fitting matrices. This in turn implies generalization guarantees that are significantly faster than the if one were to minimize the Frobenius norm instead of the nuclear norm. Strengths: The description of the flatness bias given by this paper is to my knowledge new, and some of the results are a bit unexpected. The writing is clear and thorough and the proofs are easy to understand. Weaknesses: The results hangs on the fact that the trace of the Hessian is the right notion of flatness for SGD, which is to my knowledge not decided yet. For GD the relevant measure of flatness is instead the largest eigenvalue of the Hessian, which will probably lead to a quite different implicit bias. And since SGD can in some cases be quite similar to GD, it seems unlikely that the bias would simply be described by the trace of Hessian, instead this exact bias could change depending on the learning rate and batch size. The lack of empirical experiments to test whether the bias described here is observed for SGD, label noise SGD or 1-SAM would help motivate this assumption. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: A shallow linear network with L2 regularization is also known to recover the minimal nuclear norm solution [Dai et al., Representation Costs of Linear Neural Networks: Analysis and Design]. Do you think that there is any advantage to rely on the flatness bias instead of the L2 regularization? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The biggest limitation is clearly the RIP condition, but it is well discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **implicit bias**: We would like to point out that in the paper [28] authors mathematically show that in the limit of step size going to zero, label noise SGD evolves according to a gradient flow according to the trace of hessian of the loss. The same fact can be seen for 1-SAM, but the reviewer is right that it might not be the right notion for vanilla SGD. **Experiments**: In the final version, we will include additional experiments that show how the trace of Hessian evolves for SGD with and without label noise, see also the pdf included with this rebuttal. Indeed, we do observe that label noise SGD decreases the trace of the Hessian. --- Rebuttal Comment 1.1: Comment: Thank you for the answer and additional experiments.
Summary: In the context of matrix sensing with deep matrix factorizations, the paper analyzes the inductive bias of interpolators with minimal Hessian trace, which is a well-known measure of sharpness. Specifically, under a Restricted Isometry Property (RIP) assumption on the linear measurements, it establishes that the minimal Hessian trace with which the deep matrix factorization can express an interpolator is approximately equal to the latter’s nuclear norm. In turn, this yields guarantees on the recovery of the ground truth matrix, as well as a separation result from the recovery obtained by the minimal Frobenius norm interpolator. As an additional contribution, develops a closed form expression for the minimal Hessian trace of an interpolator when there is only a single linear measurement. Strengths: 1. Well-written and easy to follow. The motivation, problem at hand, and main results are clearly described. 2. Helpful discussions are provided for each of the technical results, in particular regarding their implications and relation to existing work. 3. The technical contributions establish a new connection between flatness, in terms of minimal Hessian trace, and generalization for deep matrix factorizations (previous work of Ding et al. 2022 studies only depth two factorizations). Despite conventional wisdom and empirical evidence suggesting that flatness may lead to good generalization, formally showing that this is the case has proven challenging. This attests to the significance of the current paper’s results. Personally, I found it interesting that depth does not help in the analyzed setting, in the sense that minimizing the Hessian trace amounts to approximately minimizing the nuclear norm, similar to the depth L = 2 case. This may suggest that deep matrix factorization with RIP measurements is an unsatisfactory setting for uncovering the benefits of depth in deep learning and its (possible) relation with flatness. Weaknesses: The points below pertain to the scope of the technical contributions, suggesting places for improvement. I do not believe that points (2) and (3) significantly harm the quality of the current paper and are perhaps considerations for future work. For (1), it seems that it may be straightforward to incorporate, in which case I believe it can increase the generality of the current work. 1. Only interpolators with exactly minimal Hessian trace are considered. Since in practice a more realistic view is that it is only approximately minimized, if possible, it is worth (even if in an appendix) extending the generalization results to interpolators whose Hessian trace is approximately minimal. 2. The current paper characterizes the regularizer induced by minimizing the Hessian trace only for interpolators. Since using explicit regularization can lead to solutions that don't entirely minimize the loss to zero, I believe it is important to understand the inductive bias of minimizing the Hessian trace for non-interpolating linear mappings as well. While more complex, it may be more realistic. 3. The current paper only treats the question of what interpolating the data with minimal Hessian trace implies, and not the complementary question of whether it, or other measures of sharpness more generally, are implicitly minimized in deep matrix factorization by standard optimizers. Additional (minor) comments and typos: - Typo in line 75: “regularzier” should be “regularizer”. - Typo in line 101: I believe “observe” should have been “observed”. - Typo in equation after line 166: I believe there is a missing gradient symbol in the middle expression and a factor of two in both the middle and right expressions. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Have you considered other notions of sharpness besides the trace of the Hessian, e.g. its the maximal eigenvalue? In particular, can minimizing the maximal eigenvalue of the Hessian ensure generalization in the considered setting, or is it too weak and one must look at the trace of the Hessian? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors have adequately addressed limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. **If possible, it is worth (even if in an appendix) extending the generalization results to interpolators whose Hessian trace is approximately minimal.** **Reply**: We will include this generalization in the appendix in the final version of the work. 2. **Since using explicit regularization can lead to solutions that don't entirely minimize the loss to zero, I believe it is important to understand the inductive bias of minimizing the Hessian trace for non-interpolating linear mappings as well. While more complex, it may be more realistic.** **Reply**: Note that for points where the loss is non-zero, the form of trace of hessian of the loss is not given by Lemma 2 and Equation (7). Therefore, our approach in approximating the minimizers of these terms given a fixed end-to-end matrix and recovering the nuclear norm of the end to end matrix is not valid anymore; namely, this approximation will further depend on the distance to the manifold of zero loss. While beyond the scope of this work, our guess is that given some measure of closeness to the manifold (i.e. close to satisfying the linear constraints), one should still be able to use the same approach to approximate the trace of hessian regularizer. 3. **The current paper only treats the question of what interpolating the data with minimal hessian trace implies, and not the complementary question of whether it, or other measures of sharpness more generally, are implicitly minimized in deep matrix factorization by standard optimizers.** **Reply**: The problem of convergence of label noise SGD to the minimizer of trace of hessian for deep linear models is open and can be an interesting future direction. **Typos**: Thanks for pointing them out, we will fix them in the final version. ## Question: **Have you considered other notions of sharpness besides the trace of the hessian, e.g. its maximal eigenvalue?** **Reply**: The maximum eigenvalue regularizer can be obtained via different algorithms and a possible analysis for deep matrix factorization with maximum eigenvalue does not seem to be relevant to the analysis of the trace of hessian we are presenting, but is certainly an interesting future direction. Empirically, label noise SGD is observed to decrease the trace of hessian; this behavior is also proved mathematically in the limit of step size going to zero, in [28]. --- Rebuttal Comment 1.1: Comment: Thank you for the response, I read it and the other reviews carefully. Accordingly, I would like to keep my initial positive assessment of the paper.
Summary: The manuscript seeks to understand Hessian trace regularization in the case of deep linear network training with the mean squared error of linear measurements. It obtains a description of the effective regulariser which can be approximated and made more explicit in some cases. The manuscript obtains results on matrix recovery and generalization. Strengths: * The problem under investigation, to characterize solutions with minimum Hessian trace, is interesting. * The work obtains results indicating that the number of measurements needed to obtain a good approximation of a target matrix is smaller when using trace regularization as opposed to l2 regularization of the end to end matrix. Weaknesses: * The presentation of the results does not make the assumptions sufficiently clear in a timely manner. * The settings are restrictive. The networks are assumed to have no bottlenecks, so that the set of representable end to end matrices have no constraints. The considered objective function is a sum of squared errors of linear measurements, whereby it is assumed that the measurements satisfy an RIP property, or that the network has only two layers, or that there is only one measurement. * The theoretical results could have been strengthened by numerical experiments, particularly given the restricted conditions for which the theoretical results are obtained. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * In Theorem 1 it is indicated that the data should satisfy an RIP assumption, but this condition is not described explicitly. The (1,\delta)-RIP condition should be described explicitly before or at the time of stating the theorem. * After Theorem 1 it is stated that in more general cases it is challenging to compute F, but it is not clearly stated what is meant by more general cases. * In Theorem 2, what is the dependence on n? * In Theorem 3, what is r? This seems to be introduced 3 pages later in Definition 3. * Theorem 4 appears to consider regularization of the end to end matrix. What would be the result if one instead minimises the l2 norm of the parameters (factor matrices)? * There are typos in the display equation following line 166. * In Theorem 5 the argument of the Hessian of the Loss should be the factor matrices? * Example 1, missing negative sign in exponent? * Please add an explanation for the second inequality in (13). * In proof of Theorem 1, argument is W_1,\ldots, W_L instead of M? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: * The work considers deep linear networks with no bottlenecks, in which case the image function space contains all matrices. * It would be interesting to combine and compare the discussion of trace of Hessian minimisation with l2 regularization of the parameters. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. **The presentation of the results does not make the assumptions sufficiently clear in a timely manner.** **Reply**: We would like to point out that the RIP and width assumptions are mentioned in pages 2 and 3, but we are happy to follow the reviewers’ suggestions on stating them earlier. 2. **The settings are restrictive.** **Reply**: In the general case of multiple measurements, that we analyze here, the trace of hessian regularizer does not have a closed form, so it is very challenging to analyze (e.g., prior works make strong assumptions such as 2 layers). Yet, we manage to approximate it with a function of the nuclear norm under RIP, which is highly non-trivial. Of course, it is an interesting future direction to understand the behavior of the implicit bias in other remaining settings, e.g. when the distribution of the matrices are heavy-tailed. 3. **The theoretical results could have been strengthened by numerical experiments**: **Reply**: Some preliminary experiments are included in Appendix F. We will include more detailed experiments in the final version, see the plots in the included pdf. ## Questions: 1. **The (1,$\delta$)-RIP condition should be described explicitly before or at the time of stating the theorem.** **Reply**: We will define RIP before the main result to make it clearer for the reader. 2. **After Theorem 1 it is stated that in more general cases it is challenging to compute F, but it is not clearly stated what is meant by more general cases.** **Reply**: By more general cases, we mean beyond the one-measurement case and the RIP condition for multiple-measurements. As we mentioned, the case of multiple measurements for depth more than two (unlike the depth two case) is not solvable in closed form. We will clarify this in the final version. 3. **In Theorem 2, what is the dependence on n?** **Reply**: Note that in general in the definition of RIP (Definition 3) $\delta$ implicitly depends on $n$ and this dependency can vary for different distributions. But for example, for the Gaussian case we have $\delta(n) = \sqrt{\frac{d_L + d_0}{n}}$ as stated right after Theorem 2. 4. **In Theorem 3, what is r? This seems to be introduced 3 pages later in Definition 3.** **Reply**: $r$ should be set to one here, thanks for pointing to this typo. 5. **Theorem 4 appears to consider regularization of the end to end matrix. What would be the result if one instead minimizes the l2 norm of the parameters (factor matrices)?** **Reply**: While in this work we are interested to see the effect of trace of hessian regularization as a function of the end to end matrix, the problem of regularizing the factor matrices, e.g., with l2 as the reviewer mentioned, is indeed interesting on its own. We conjecture the induced regularizer of $\ell_2$ norm minimization is the Schatten-$2/L$- (semi)norm of the end-to-end matrix, though we don’t have a proof for depth $L$ larger than $3$. 6. **There are typos in the display equation following line 166.** **Reply**: Thanks for pointing this out, we will add the missing gradient to this equation. 7. **In Theorem 5 the argument of the Hessian of the Loss should be the factor matrices?** **Reply**:We see $\min\{W_1W_2=M\}tr[\nabla^2 \mathcal L]$ in total as a function of $M$, which is why we put its argument to be $M$. 8. **Example 1, missing negative sign in exponent?** **Reply**: Thanks for pointing this out, we will fix it in the final version. 9. **Please add an explanation for the second inequality in (13).** **Reply**: That is a rearrangement of the terms and indeed it is an equality, we will clarify it further in the final version. 10. **In proof of Theorem 1, argument is $W_1,\ldots, W_L$ instead of $M$?** **Reply**: Same response as case 7 above. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I appreciate the authors response to my initial review, particularly about organisation of definitions, a few derivations, and typos. My general assessment of the manuscript remains unchanged, leaning accept.
Summary: This work considers the implicit regularization of deep neural networks with linear activations and linear data. While previous works have shown that stochasticity in optimizers has the effect of smoothing the loss function, this work derives how this less sharp loss function can result in better generalization performance under certain assumptions. Specifically, the authors derive the induced regularizer in three cases: (1) two-layer linear networks, (2) networks learned with a single example, and (3) data that satisfies the restricted isometry property. Then, in these settings, generalization bounds are derived. Strengths: * I found this work to be very well presented and clear. In general, I found the presentation made the content quite accessible for a reader not familiar with the literature. I appreciate the author's effort to ensure the content was as self-contained as possible with all required background knowledge neatly included. * I found Sec. 1.1 summarizing the main results of the work to be extremely helpful before proceeding into the details later. * Notation was well-considered, clear, and consistent. * The related work section was extensive between the main text and the appendix. Although I am not familiar with this literature, this section provided a helpful summary (although I cannot comment on its accuracy). * The claims made in the abstract are an accurate reflection of the paper's contributions. * Some synthetic numerical experiments are provided to validate the theoretical findings. Weaknesses: * My single major issue with this work is with respect to its practical significance. Almost the entire contribution of this work lies in the theoretical results derived within. However, all of these results are derived under extremely restrictive assumptions/conditions. My understanding is that they assume (at a minimum): - A neural network with linear activations. - The regression setting (I believe results would not hold under e.g. cross-entropy loss?) - The relationship between the observations and their ground truth targets is linear. - No hidden layer within the network has fewer hidden neurons than either the input or output layer. This would rule out any problem with a single label per observation. Then, the results are derived within three even more restricted settings: (1) the network has exactly two layers (2) the network is learned upon a dataset consisting of a single example or (3) the data satisfies the RIP property. Some of the later theorems require even more assumptions such as $iid$ observations satisfying $\mathbb{E}_A \braket{A,M}^2 = ||M||^2_F$. I believe it is uncontroversial to say these results cannot be applied to any practical task. Therefore, I would challenge the authors to make the case as to how this work contributes itself or could have some downstream contribution to real-world problems/tasks. While this conference is a suitable venue for a theory-style paper such as this, I would argue that the choice of whether to accept such a paper should be heavily weighted by its potential practical impact (otherwise we can always invent interesting, but purely fictitious problems). Minor issues/clarifications: * While I realize that the experiment was straightforward, it is always nice to include the code used to generate the results. Could this be added to a camera-ready version? * On L22+23 the paper states that "despite the overwhelming empirical evidence on ... the effectiveness of using sharpness regularization on improving generalization [15, 47, 51, 37], the connection between penalization of the sharpness of training loss and better generalization still remains unclear." Could the authors clarify how these are not the same thing? I suspect the point is clear to the authors but could just be worded more clearly. * On L169: is this intended to refer to eqn (12)? * Should the experiments in Fig 2 not also include a baseline of a model trained without label noise? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * I found the terminology of "end-to-end parameters" (e.g. L52+53) to be slightly non-intuitive for its intended meaning. Possibly because in other parts of the machine learning literature this terminology refers to a different concept (models that are trained jointly in a non-modular way). Is this standard terminology in this literature? If not, it might be helpful to use an alternative or make this more clear. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: Yes. I cannot foresee any potential negative societal impact of this work Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **"While this conference is a suitable venue for a theory-style paper such as this, I would argue that the choice of whether to accept such a paper should be heavily weighted by its potential practical impact (otherwise we can always invent interesting, but purely fictitious problems)"** **Reply**: Deep matrix factorization is a practically significant setting which was not invented in this work; it is established by previous Neurips papers, e.g., [Implicit regularization in matrix factorization, Gunasekar et al., NIPS 2017](https://proceedings.neurips.cc/paper_files/paper/2017/file/58191d2a914c6dae66371c9dcdc91b41-Paper.pdf), [Implicit regularization in deep matrix factorization, Arora et al., NIPS 2019](https://proceedings.neurips.cc/paper_files/paper/2020/file/f21e255f89e0f258accbe4e984eef486-Paper.pdf). **"The activations are linear."** **Reply**: As we pointed out, understanding the trace of Hessian regularizer for deep linear models is highly non-trivial as the regularizer is not computable in closed-form for depth beyond two. Therefore, similar to some of the previous impactful papers in the theory of deep learning, here we also focus on the first major difficult problem which is dealing with deep linear models. **"No hidden layer within the network has fewer hidden neurons than either the input or output layer. This would rule out any problem with a single label per observation."** **Reply**: This is not the case, for example if one chooses $A_i$ to be in the rank-one form $w^\top x_i$, for fixed weight vector $w$ and input vector $x_i$, then the single output prediction problem is recovered. The purpose of our work is to illustrate the surprising almost equivalence of the trace of hessian regularizer with the nuclear norm of the end-to-end matrix in deep matrix factorization, and the RIP condition is a natural and fairly general assumption on the data which allows the analysis to go through in a clean way without unnecessary lengthy derivations (indeed one can probably regenerate this result using some other concentration techniques but its beyond the purpose of this work.) ## Minor Issues and Questions: **It is always nice to include the code used to generate the results. Could this be added to a camera-ready version?** **Reply**: Yes we will include the code in the camera-ready version. **On L22+23 the paper states that "despite the overwhelming empirical evidence on ... the effectiveness of using sharpness regularization on improving generalization [15, 47, 51, 37], the connection between penalization of the sharpness of training loss and better generalization still remains unclear." Could the authors clarify how these are not the same thing?** **Reply**: Penalizing the sharpness of the training loss is an implicit bias mechanism, but it is not clear how this bias will impact the generalization behavior for general networks. **On L169: is this intended to refer to eqn (12)?** **Reply**: Yes, but since it is a forward reference it might be clearer if we remove it, so we will rephrase this in the final version. **Should the experiments in Fig 2 not also include a baseline of a model trained without label noise?** **Reply**: We will add the no label noise case to the final version, and also more settings. See the plots in the pdf included with this rebuttal. **I found the terminology of "end-to-end parameters" (e.g. L52+53) to be slightly non-intuitive for its intended meaning.** **Reply**: We thank the reviewer for mentioning this delicate point, we will change this phrase to end-to-end matrix in the final version. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. While I appreciate the references provided, I don't think this addresses the point I was making. Let me try to be more succinct. The results derived in this (and other similar) work(s) are under highly restrictive settings (linear data, linear activations, etc.) that don't represent the setting in which we are interested in better understanding the inductive biases of neural networks (e.g. typical architectures used in practice). Therefore, in works such as these, a vital consideration should be to what extent the results transfer over. Admittedly this might be something we can only investigate empirically but without doing so it is impossible for us to know if the intuitions one might obtain from theoretical work such as this should guide our intuitions in practical settings. Without doing so I would argue that this work is incomplete. I think this would be an obvious expectation on any empirical work in such a restrictive setting, therefore I think we should maintain the same standard for theoretical works. While I appreciate there is precedent for theoretically investigating restrictive settings, that does not in itself imply that we should continue deepening our research into these settings without consideration of the value of such results. --- Reply to Comment 1.1.1: Comment: Thank you for your response. Regarding the reviewer's response, we would like to briefly remind the following points: * Theoretical study is important in the long run. * Theoretical studies have to start with restricted cases even though there is a gap with practical world; many successful theoretical research started with linearized models; without understanding easier case, there is no chance to understand practical, complex cases.
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their effort to provide helpful comments regarding our work. In the following, we have replied to their comments separately. Pdf: /pdf/0182e22dd091fd4f28069b199850bc1e9b2f4583.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Mode Connectivity in Auction Design
Accept (poster)
Summary: The paper seeks to provide a theoretical foundation for empirical results demonstrating that optimal auctions (both known and novel) can be discovered by differentiable auction theory. It proves that two such auction formats satisfy a condition called ‘mode connectivity’: epsilon-mode connectivity is satisfied when two (local) solutions are connected by paths in parameter space. Strengths: **Originality**: to my knowledge, this work is novel. **Quality**: this seems well executed. **Clarity**: well-written. **Significance**: the overall significance of the paper is hard to assess. Insofar as it provides a theoretical basis for existing empirical results, I feel that the bar for significance is cleared. Weaknesses: My main concern about the paper is that it identifies a perplexing regularity - that both the RochetNet and AMA results require "five pieces" - but then does not seek to understand that. Minor: - p.1: DSIC printed as DISC - l.79: “straight-jacket” or “strait-jacket”? - l.135: “each in unit supply” clearer - l.166: whenever I see “it is … easy to see”, I want to see at least a hint of explanation: I’ve tried to cover over results that I suspected to be true (but later discovered weren’t) this way. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Dominant strategy incentive compatibility is a strong concept. I would appreciate a few sentences on what that buys them relative to e.g. Bayesian-Nash incentive compatibility. - l.66: re: the link between epsilon-dropout stability and epsilon-mode connectivity, is it useful to think in terms of a Lipschitz constant? - l.92 mentions an example with 10,000 neurons and 59 active options: can this be expressed in terms of K? - lines 126-127: why do the menu options define a “piecewise linear surface of utilities”? - My overall intuition from this is that large neural networks can overfit: a large number of neurons could, further, yield inactive options, which (as they are dominated?) can be thrown away without loss. Is something like this correct? If so, it would be worth emphasizing that. - My guess is that we need no more than one ‘contract’ per type of bidder: is this correct? If so, it should be mentioned (if it hasn’t been). - To me, the most curious result is the “five pieces” element of some of the RochetNet and AMA theorems: what explains this? How much more general is it? (Thus, could the results of S3, S4 be presented as special cases?) Can anything more specific be said about the pieces? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for carefully reading and assessing our paper and for the valuable feedback. We provide a detailed response to the raised issues / questions below: > p.1: DSIC printed as DISC A: Will be fixed, thanks! > l.79: “straight-jacket” or “strait-jacket”? A: This should be “straight-jacket”. We will add a reference for this. > l.135: “each in unit supply” clearer A: Will be added. Thanks! > l.166: whenever I see “it is … easy to see”, I want to see at least a hint of explanation: I’ve tried to cover over results that I suspected to be true (but later discovered weren’t) this way. A: The auction with a menu is DSIC, as the mechanism will provide the option with the buyer's maximal utility given the bid reported. Therefore, the buyer has no incentive to misreport. Questions: > Dominant strategy incentive compatibility is a strong concept. I would appreciate a few sentences on what that buys them relative to e.g. Bayesian-Nash incentive compatibility. A: For single buyer cases (i.e. RochetNet), Bayesian-Nash incentive compatibility is equivalent to dominant strategy incentive compatibility. They become different when there are multiple buyers (Affine Maximizer Auctions). For example, in the paper "Dominant-Strategy versus Bayesian Multi-item Auctions: Maximum Revenue Determination and Comparison" by Andrew Chi-Chih Yao, it is shown that these two concepts are different in the case when there are two buyers and two items. We want to emphasize that dominant strategy incentive compatibility is more robust than Bayesian one, as Bayesian-Nash incentive compatibility needs full knowledge of buyers' valuation distributions. > l.66: re: the link between epsilon-dropout stability and epsilon-mode connectivity, is it useful to think in terms of a Lipschitz constant? A: We are not exactly sure how to understand this question. Here is an attempt of an answer, please let us know if this is helpful: Lipschitz constant indicates that, for any point x, the function value does not change a lot within a neighborhood of x. However, when considering two distant points x and y which are far from each other, the Lipschitz constant does not provide a strong restriction on the function value along any path connecting x and y. It is possible that x and y are local minima of two valleys, and any path from x to y will encounter very high function values. In contrast, epsilon-mode connectiveity ensures this will not happen. > l.92 mentions an example with 10,000 neurons and 59 active options: can this be expressed in terms of K? A: Yes, K would be 10,000 in this example, and it seems that much fewer than all K menu options (namely only 59) are active. > lines 126-127: why do the menu options define a “piecewise linear surface of utilities”? A: We consider the relationship between valuation v and the utility u. According to the definition of RochetNet (see Section 2.1), $u = \max_k v^\top x^{(k)} - p^{(k)}$. This function u is a piecewise linear function in terms of v. > My overall intuition from this is that large neural networks can overfit: a large number of neurons could, further, yield inactive options, which (as they are dominated?) can be thrown away without loss. Is something like this correct? If so, it would be worth emphasizing that. A: From what we understand, your intuition is almost correct, and this is actually the main intuition behind Dughmi et al. [2014], which is the key tool in proving Theorem 9. However, one needs to be careful as it is possible that each neuron serves a very small region of valuation, and throwing these neurons may cause a large deviation of revenue. The way to deal with this is we also need to modify the remaining neurons after removing the others. Please let us know if this answers your questions. > My guess is that we need no more than one ‘contract’ per type of bidder: is this correct? If so, it should be mentioned (if it hasn’t been). A: Each buyer can only be assigned one option in the menu. However, even in the single-buyer case it might be crucial to offer the buyer many (for global optimality sometimes even infinitely many) options to choose from. > To me, the most curious result is the “five pieces” element of some of the RochetNet and AMA theorems: what explains this? How much more general is it? (Thus, could the results of S3, S4 be presented as special cases?) Can anything more specific be said about the pieces? A: The fact that our results work with five pieces stems from how our proofs work: we basically transform one menu into the other one in five linear steps. We do not know whether fewer than five pieces are possible, too. However, it also does not matter much: for us the most important message behind these five pieces is that it is a very simple structure: the path is not very complicated, it is just piecewise linear with very few pieces. It does not seem very relevant whether there are, say, 3, 5, or even 20 pieces. It is therefore not surprising, that both cases yield exactly 5 pieces: since the proof strategies are similar, the results are similar, too. It is possible that the results of S3 and S4 could both be derived as special cases of something more general, but we think that this more general framework, if it exists, would be very complicated and unintuitive. It is probably more valuable to have a clean, short, readible proof that only works for RochetNet, and a separate, more technical version for AMA, as we present it in this paper. --- Rebuttal Comment 1.1: Comment: Thank you. This reply maintains my conviction that this is publishable at NeurIPS. Very minor comments: 1. I was being understated above: it really should be 'straitjacket'; q.v. https://en.wikipedia.org/wiki/Straitjacket. ('Strait' in this sense meaning 'narrow': e.g. the Straits of Gibraltar, Magellan, Malacca, Hormuz...) 1. on line 92, with $k=10000$: I wanted to see the formula, in $k$, mapping to 59 active options. 1. on offering infinitely many options, maybe I'm missing the obvious: from an economic theory point of view, we should only need to offer as many options as there are possible types. Do we need to offer _more_ than that? If so, this (I would guess) is related to the learning requirements of escaping from non-convexity? If that's right, it could make sense to mention that the number of menu items partitions into requirements for efficiency/separating equilibria (e.g. one per type) and those for learning. 1. on the five pieces present in each proof, I'm not convinced that this is just an artefact of your proof technique - although you may be right. I'd love there to be a tighter reason, but accept that this may have to wait for subsequent papers. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the follow-up comments. 1. In fact, this concept is called straight-jacket auction and not straitjacket auction in previous auction design literature, see https://arxiv.org/abs/1404.2329, where this concept was introduced. 2. We understand the experiments by Dütting et al. (2019) more as qualitative justification for the assumptions of some of our theorems, and do not expect the number of active neurons to follow a precise formula in practice. Nevertheless, in order to be $\epsilon$-reducible with $K=10000$, the number of active neurons (with high probability) should be at most $\sqrt{K+1}=\sqrt{10001}$, which is approximately $100$. In the experiments by Dütting et al. only $59$ were active, which justifies that our assumption of $\epsilon$-reducibility is reasonable in practical cases. 3. In fact, even if our intuition might tell us otherwise, the optimal menu might require infinitely many options already in very simple settings with only a single bidder and only two different items (see Section 5.6 in Dütting et al. (2019)). This is a structural result and totally independent from any optimization algorithm or learning approach used. Nevertheless, your intuition about escaping from non-convexity by offering a large number of options is correct: this is exactly the idea behind our Theorems 9 and 13 and also seems to work in practice (compare once again the experiments by Dütting et al. (2019)). 4. We suppose this remains a mystery until someone solves it. ;-)
Summary: This paper studies the mode connectivity for specific neural network architecture, i.e., RochetNet and Affine Maximizer Auctions(AMA), where the local optimal solutions produce close local optimal solutions. To be specific, - for linear utilities, $\epsilon$-reducible solutions implies $\epsilon$- mode connectivity. - for $n$ items and linear utilities, for large menu options $K$, then $\epsilon$-mode connectivity holds between any solutions for any distribution. - Similar results also holds for AMA. Strengths: - The problem studied is a heated topic, and is a really interesting interdisciplinary area between auction design and neural network design. - The result is quite novel, and establish some difference against existing work on mode connectivity. Weaknesses: The network architecture difference of existing work and the neural network studied in this paper needs further clarification, and more emphasis. The current presentation of this paper needs a major revision, especially on our contributions sections. To name a few: - There are some concepts that refer to later section of the paper, and the reviewer strongly suggests moving these definitions (or create an informal version) to the introduction section for readability, e.g., $\epsilon$-reducible appeared for line 85, 89, 110, affine maximizer for line 103. - line 73-83 doesn't include the results for this paper, but rather a brief intro of RochetNet. - For reviewers unfamiliar with the word "options", it's really hard to understand the sentence in line 75. "single hidden layer where the neurons directly correspond to menu options". - The reviewer strongly suggests to put figure 1 in the intro section, and adding more details to the figure would helps a lot of the understanding of the paper. - In section 1.1, many results/theorems are mixed with existing results, which is quite confusing. - The definition of mode connectivity doesn't appear until page 6 of the paper... Some other minor comments: - line 21, please replace "DISC" by "DSIC". - line 79, "straight-jacket auction" should be supported by citations. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: - What's the definition of the word "option" studied in this paper? - Does this $\epsilon$ scale with the upper bound of the distribution, of the distribution is not in $[0,1]$, but in $[0,H]? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: This paper is theoretical, hence there are no experiment results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for carefully reading and assessing our paper and for the valuable feedback. We regret that the reviewer perceives the presentation of our paper as poor. We are thankful for the constructive feedback and will do our best to address the issues raised by the reviewer in the final version. We provide a detailed response below. > The network architecture difference of existing work and the neural network studied in this paper needs further clarification, and more emphasis. A: We explain these differences in lines 118 to 133 of the initial submission. We also provide formal definitions of the considered architectures in Sections 2.1 and 2.2. Nevertheless, we understand that we should emphasize these differences more (see also our response to reviewer aV1n), and will do so in the final version. > There are some concepts that refer to later section of the paper, and the reviewer strongly suggests moving these definitions (or create an informal version) to the introduction section for readability, e.g., $\varepsilon$-reducible appeared for line 85, 89, 110, affine maximizer for line 103. A: Thanks for the suggestions. We will move corresponding (informal) definitions to earlier points in the paper. > line 73-83 doesn't include the results for this paper, but rather a brief intro of RochetNet. A: We agree, but we feel that for a smooth reading flow, it is necessary to provide this brief intro here to understand our contribution. It is not long enough to be moved into a separate (sub)section. See also our explanation further down. > For reviewers unfamiliar with the word "options", it's really hard to understand the sentence in line 75. "single hidden layer where the neurons directly correspond to menu options". A: The word option (an allocation and a price offered to the buyer) is explained in the very same line after the colon. In the final version we will reformulate the sentence to make it more understandable. See also our response to the related question below. > The reviewer strongly suggests to put figure 1 in the intro section, and adding more details to the figure would helps a lot of the understanding of the paper. A: The purpose of Figure 1 is to illustrate the formal definition of the RochetNet. Going into such great detail in the intro already seems to be a bit over the top for us. If the reviewer insists, we can add another, simplified version of the figure to the intro in the final version. > In section 1.1, many results/theorems are mixed with existing results, which is quite confusing. A: Our impression is that providing the context (previous results) while presenting our own contributions is more important than a clear visual separation. We still carefully state which results are our own ones. If the reviewer insists, we can try to separate this more, but we fear that this makes it harder to understand the context. Our paper aims to provide a theoretical explanation for the existing empirical success of mechanism design. Therefore, to effectively convey our contribution, we believe it is necessary to first present the key details of the existing empirical results and introduce our theoretical findings subsequently. > The definition of mode connectivity doesn't appear until page 6 of the paper... A: It is defined informally in the mode connectivity paragraph. To make it visually more clear, we will convert it to a definition environment in the final version. Some other minor comments: > line 21, please replace "DISC" by "DSIC". A: Will be fixed. Thanks! > line 79, "straight-jacket auction" should be supported by citations. A: We will add a citation. Thanks! Questions: > What's the definition of the word "option" studied in this paper? A: An option is a pair consisting of an allocation and a price offered from the seller to the buyer. We define this in line 75 of the original submission. As stated above, we will reformulate this sentence in the final version to avoid confusion. A set of several options is called a menu. For example, in the two-items case, a menu with two (non-trivial) options could be as follows: the first option is to get the first item and pay 5 dollars. The second option is to get both items and pay 10 dollars. Furthermore, we always assume that the buyer also has the option to buy nothing and pay nothing. Out of all options, the buyer needs to choose a single one. > Does this scale with the upper bound of the distribution, of the distribution is not in $[0, 1]$, but in $[0,H]$? A: Yes, it does. Then, one needs to replace $\varepsilon$ with $H \varepsilon$. --- Rebuttal Comment 1.1: Title: Acknowledgement Comment: I agree that the theoretical contribution of this paper is non-trivial, and this is not a special case of the existing mode connectivity literature. I updated my score accordingly, but I strongly suggest the author to pay detailed attention to the presentation.
Summary: The starting point for the paper is recent work in the area of “differentiable economics”, in which high-revenue strategyproof auction mechanisms are found by optimizing parameters using machine-learning-inspired gradient descent techniques. The authors consider the problem of selling multiple goods to a single buyer. In such setting, strategyproof mechanisms can always be identified with a “menu” of (allocation, price) pairs — the bidder chooses their best choice from the menu, and carefully optimizing the menu can increase revenue. The neural architecture that represents these menu items is called “RochetNet”. They also consider multi-buyer versions of this problem, in particular searching through the space of affine maximizer auctions (AMA), which are structurally similar (there is a “menu” of possible outcomes, and a “boost” which roughly plays the role of the price, although actual per-bidder payments are calculated according to a VCG-style rule). In these classes of auctions, the performance goal (revenue) is a very non-convex function of the auction parameters. Nevertheless, first-order optimization seems to work well in finding good or even known-optimal mechanisms. Also, empirically, allowing the auctions to learn over thousands of menu items helped performance, even though at the end of training, only a handful of these menu items were actually used (and in some cases known-optimal mechanisms might only have 3-4 menu items). The authors of this paper aim to explain these phenomena. They build on existing work in deep learning for more standard tasks, where things work similarly: even though loss landscapes are not convex, it is possible to optimize over them using first-order methods, and overparameterizing the neural networks seems to help with this. One recent direction of theoretical work aims to explain these phenomena using the concept of “mode connectivity” — there are various results showing that if it is possible to remove lots of neurons from a trained network and get almost the same performance, then any two such solutions must be connected by a continuous path where at any point along the path, the loss function is within some constant of the two solutions. Presumably, loss landscapes with such a property should be easier to optimize over. The authors establish similar properties for the two types of auction architectures. In particular, between any two menus where there is a small subset of options that would be chosen by a bidder with probability 1-\epsilon, embedded in a much larger unused or redundant menu, they show that (epsilon) mode connectivity holds. Additionally, any pair of menus with sufficiently large (as a function of epsilon and the number of items) menus is also epsilon-mode connected. Analogous results hold for AMAs.This provides a nice theoretical explanation of why these auctions are unexpectedly easy to optimize successfully, and why using large menus may help optimization. Strengths: Two separate papers in this area observed an interesting and useful empirical phenomenon but gave no good explanation for it. This paper gives a very solid explanation, and is extremely interesting for that reason. It also will hopefully motivate better designs and learning techniques for strategyproof auction architectures. Weaknesses: Currently I don’t see any significant weaknesses. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I recommend further discussion of Shen et al. “Automated mechanism design via neural networks”, which presents an architecture that for certain cases is equivalent to RochetNet. It’s cited at one point but the actual neural architecture merits discussion (it also encodes menu items). Is there an explicit explanation of exactly why mode connectivity helps optimization? Maybe it’s out there in the literature already. If so I think it’s worth devoting a few sentences to this. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Limitations are adequately addressed — in particular, that these techniques probably won’t extend to even more complicated/flexible auction architectures — they rely on the relatively simple “menu based” or “list of allocations + boost” structure of the particular architectures in question. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for carefully reading and assessing our paper and for the valuable feedback. > I recommend further discussion of Shen et al. “Automated mechanism design via neural networks”, which presents an architecture that for certain cases is equivalent to RochetNet. It’s cited at one point but the actual neural architecture merits discussion (it also encodes menu items). A: Thanks for pointing out this. We will incorporate further discussion of Shen et al. into our final version. MenuNet, developed by Shen et al. also encodes menu items and it generalizes RochetNet as it allows non-linear utility functions and can incorporate other networks trained from interaction data. One difference between MenuNet and RochetNet is their handling of valuation distributions. RochetNet repeatedly samples the valuations from the underlying distribution, whereas MenuNet discretizes the buyer's valuation space to discrete values and treats all possible discrete valuations as a collective input. Of course, in those cases where RochetNet and MenuNet are equivalent, our results directly apply to MenuNet, too. However, for general, non-linear utilities, we believe that it is interesting to obtain similar results, but this might be much more challenging. > Is there an explicit explanation of exactly why mode connectivity helps optimization? Maybe it’s out there in the literature already. If so I think it’s worth devoting a few sentences to this. A: We will add such a discussion to the final version. Basically, the intuition why mode connectivity is desirable is the same as for previous mode connectivity results. In the following we outline our perspective on this. Mode connectivity can help to explain the empirical performance of stochastic gradient descent (sgd) (or ascent, in case of revenue maximization). To some extent, mode connectivity prevents a poor local minimal valley region on the function value, from which the sgd method cannot escape easily. Suppose such a bad local minimum exists. Then mode connectivity implies that there exists a path from this bad local minimum to a global minimum on which the loss function does not significantly increase. Therefore, the intuition is that from every bad local minimum, a (stochastic) gradient method would be able to find a way to escape. In fact, such a very bad local minimal valley region does appear in the auction setting when the menu size is 1 (see Appendix C), which seems to counter the empirical performance of achieving the global optimum eventually. This paper proves that this will not appear when the menu size is big. However, note that the mode connectivity cannot fully justify the success of gradient descent, and there is a gap here, which also exists in the previous work on mode connectivity for neural networks. Mode connectivity only suggests that the local search algorithms are unlikely to get completely trapped. --- Rebuttal Comment 1.1: Title: thanks Comment: Thanks for responding to each of these questions.
Summary: This paper studies the mode connectivity in neural networks for design auction mechanisms. Specifically, the authors prove that locally optimal solutions are connected by a simple, piecewise linear path such that every solution on the path is almost as good as one of the two local optima. Strengths: The paper proposes to address a significant problem: how to design an auction mechanism via machine learning. The proof is given in detail and looks all correct. The presentation is clear and well-structured. Weaknesses: The novelty and significance are not clear. Is the mode connectivity in auction mechanism just a special case of the mode connectivity of neural networks in general? Following this intuition, the proof seems straightforward: The functional from a neural network to its corresponding revenue is a bounded (continuous) functional; the functional from this neural network to its training loss is also a bounded functional. Thus, it is a continuous function from training loss to revenue. Therefore, the mode connectivity of neural networks in term of the training loss implies the mode connectivity of auction mechanism in term of revenue. Some may argue that these bounded/continuous properties may not always hold, but given the neural networks are usually trained by SGD or its variants, these properties should stand. ------ POST-REBUTTAL: my concerns are addressed. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Please address the weakness above. I would like to increase the score if it can be cleared. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: Please see the weaknesses above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for carefully reading and assessing our paper and for the valuable feedback. As we state in the paper already, we do not believe that mode connectivity in auction settings follows as a special case from general mode connectivity results. We will make this clearer in the final version. In response to the question raised by the reviewer, in the following, we explain (i) why we think the argument suggested by the reviewer is not sufficient to deduce mode connectivity for auction design from previous mode connectivity results and (ii) why we do not think that a similar reasoning is possible at all. (i) We are not sure what exactly the reviewer means by "training loss" in the suggested argument, but we suppose it is some loss function on some training data in a setting for which mode connectivity is known. We would like to point out that this setting is structurally very different from our setting, in which training data does not really come as labeled pairs. Instead, in the auction setting, the expected revenue is directly maximized via stochastic gradient ascent by sampling from the valuation distribution. Moreover, even if one were able to define a meaningful notion of "training loss" as suggested by the reviewer, for which mode connectivity holds by previous results and which depends continuously on the network parameters, the reasoning suggested by the reviewer does not work at two different points: 1. the revenue is NOT a continuous function of the network parameters due to the involved, discontinuous argmax. 2. Even if it was, there cannot exist a well-defined mapping from "training loss" to revenue (or vice versa): different values of revenue would correspond to the same values of "training loss" and vice versa. Therefore it does not even make sense to speak about continuity of such a mapping. (ii) More generally, deducing mode connectivity for auction settings from previous mode connectivity results appears to be infeasible for the following reason. To our best knowledge, previous works on the mode connectivity of neural networks crucially rely on the properties that the networks minimize a convex loss function between the predicted and actual values, and require linear transformations in the final layer. However, these properties do not hold in RochetNet and AMA networks, where the structures are fundamentally different. For example, the loss function of RochetNet is not even a function of the network's output, but is defined by choosing the price of the argmax option, which is neither a linear transformation nor convex. We do not see how previous techniques can be applied here with a simple modification. Actually, other than mode connectivity, many properties or characterizations of neural networks cannot be applied to auction settings. For example, we know, for neural networks, the optimal training loss tends to 0 with sufficient over-parameterization. In contrast, the optimal revenue of auction mechanisms is extremely challenging: it remains largely unknown for most buyers' valuation distributions, even in the RochetNet case. --- Rebuttal Comment 1.1: Comment: Thanks for your response. Most of my concerns are cleared.
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper focuses on justifying the empirical success of this differentiable economics, particularly in the context of menu-based methods like RochetNet and Differentiable AMA auctions. The authors introduce the concept of the $\epsilon$-mode connectivity property, which establishes that two locally optimal menus are connected by a simple path where the revenue loss along the path is at most $\epsilon$. The paper demonstrates that this property holds true under two conditions: - If the valuations are normalized, and the menus are $\epsilon$-reducible, meaning that there exists a small subset of menus that are active for a buyer with a probability of $1-\epsilon$. - If the number of menu options is sufficiently large. Strengths: **Significance:** Although menu-based approaches for differentiable economics show promising empirical performance, there is a lack of understanding regarding their theoretical properties. This paper takes the first stride towards exploring these theoretical aspects. **Originality:** While mode-connectivity has primarily been studied in the context of prediction problems involving convex losses with linear transformations in the final layer, RochetNet and Differentiable AMA minimize negated revenue loss, which involves more complex calculations. This fundamental difference sets them apart in their analysis. **Clarity and Quality:** The paper is well written and easy to follow. Weaknesses: - This paper would benefit a lot from a discussion on how exactly mode connectivity can be used to justify empirical performance. While I agree that this is a cool theoretical property, it is very unclear to me how this exactly would help - There is also no discussion/comments what practitioners should make of this property at all - (are there any insights on how to initialize these networks better to break permutation symmetry of menus etc or anything that would help improve training) - The optimal allocations are not necessarily finite (this assumption isn't state clearly wherever used). See Example 3 in [*] [*] Daskalakis, C., Deckelbaum, A., and Tzamos, C. (2017). Strong duality for a multiple-good monopolist. Econometrica, 85:735–767. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Please see Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for carefully reading and assessing our paper and for the valuable feedback. In the following we respond to the three questions/weaknesses raised by the reviewer. >This paper would benefit a lot from a discussion on how exactly mode connectivity can be used to justify empirical performance. While I agree that this is a cool theoretical property, it is very unclear to me how this exactly would help A: We will add such a discussion to the final version. Basically, the intuition why mode connectivity is desirable is the same as for previous mode connectivity results. In the following we outline our perspective on this. Mode connectivity can help to explain the empirical performance of stochastic gradient descent (sgd) (or ascent, in case of revenue maximization). To some extent, mode connectivity prevents a poor local minimal valley region on the function value, from which the sgd method cannot escape easily. Suppose such a bad local minimum exists. Then mode connectivity implies that there exists a path from this bad local minimum to a global minimum on which the loss function does not significantly increase. Therefore, the intuition is that from every bad local minimum, a (stochastic) gradient method would be able to find a way to escape. In fact, such a very bad local minimal valley region does appear in the auction setting when the menu size is 1 (see Appendix C), which seems to counter the empirical performance of achieving the global optimum eventually. This paper proves that this will not appear when the menu size is big. However, we agree that the mode connectivity cannot fully justify the success of gradient descent, and there is a gap here, which also exists in the previous work on mode connectivity for neural networks. Mode connectivity only suggests that the local search algorithms are unlikely to get completely trapped. > There is also no discussion/comments what practitioners should make of this property at all - (are there any insights on how to initialize these networks better to break permutation symmetry of menus etc or anything that would help improve training) A: We see the main contribution of our paper in explaining the empirical success and providing theoretical foundations for already existent practical methods, and not in inventing new methods. Nevertheless, two insights a practitioner could use are as follows: (i) It is worth understanding the structure of the auction in question. If one can, e.g., understand whether $\epsilon$-reducibility holds for a particular auction, this might indicate whether RochetNet or AMA are good methods to apply to this particular case. (ii) Size helps: If one encounters bad local optima, increasing the menu size and rerunning RochetNet/AMA might be a potential fix (and will eventually lead to a network satisfying mode connectivity). > The optimal allocations are not necessarily finite (this assumption isn't state clearly wherever used). See Example 3 in [...]. A: We agree, and we do mention this in our paper, e.g. in the second paragraph of the introduction: "Already for two items and a single buyer, the description of the optimal mechanism may be uncountable [...]." However, our statements are not missing any assumptions as our proofs do not require the optimal allocation to be finite. Even if the optimal allocation is infinite, mode connectivity between any two considered menus (stemming from RochetNet or AMA) holds under the stated assumptions. --- Rebuttal Comment 1.1: Title: Updated my scores Comment: Thanks for the clarification. I have updated my scores - I hope the authors will add the required discussion in their final version.
null
null
null
null
null
null
TopP&R: Robust Support Estimation Approach for Evaluating Fidelity and Diversity in Generative Models
Accept (poster)
Summary: The paper observes that the existing metrics for evaluating fidelity and diversity of generative models can be unreliable when outliers or non-IID perturbations are present. Thus, the authors propose a new metric based on Kernel Density Estimation under topological conditions that is more robust in the earlier scenarios. The main contributions of this paper are the metric that is robust to outliers and theoretical guarantees for robustness. Strengths: The paper assesses a very important topic of evaluating generative models and proposes a metric that provides robust estimates of sample fidelity and diversity in several toy/synthetic examples. Additionally, bringing concepts from topological data analysis (TDA) to evaluation of generative models might open new future directions in metric research. Weaknesses: The paper is difficult to follow for readers that are not very familiar with TDA and it could be improved significantly by adding intuition and a visual introduction to the most important concepts TDA and precision and recall. Furthermore, the concept of persistent homology, persistence diagram and confidence band estimation are discussed on a general level in Sec. 2 but the discussion seems disconnected how these are related to the proposed metric – adding concrete and intuitive examples would make this section more approachable. Unfortunately, I believe that in the current state paper is not very accessible to a wider audience. The novelty of the work is challenging to evaluate and potentially limited. In particular, the proposed metric seems to rely on the removal of outliers before evaluating the Top P&R, and the Top P&R part of the metric seems quite similar to previous work. Ideally, I would hope to see real world use cases with state-of-the-art models, where the metric provides more reliable evaluation (now Tab. 1 Top P&R yields very similar results to improved P&R). Technical Quality: 3 good Clarity: 1 poor Questions for Authors: 1. The authors claim that robustness is required in practical evaluations of models since these cases are “wild”, i.e., might corrupt/outlier samples or the distributions might be sparse without density. I am curious as to why the proposed metric seems to perform similarly as improved PR with real generative models in Tab. 1? Also, Tab. 1 does not tell which dataset these models were trained and evaluated on. Since the proposed metric is composed of an outlier removal and the Top P&R evaluation, how much of the performance in toy/synthetic datasets can be explained by the outlier removal part? Would adding a similar outlier removal to existing improved PR and DC improve their performance in the toy experiments? 2. I think detecting outliers might prove to be difficult in practice, compared to the toy experiments presented in the paper. Real datasets do not ground-truth “outlier labels” and in some datasets, e.g. in medical imaging, the distribution has a very long tail as there might be more data from patients without a certain disease than with the disease. In this case, modeling the patients with the disease might be more interesting than modeling patients without the disease. However, when evaluating Top P&R the contributions of the patients with the diseases might be suppressed as they are deemed as outliers. 3. How does the dimensionality of the random projection affect the performance of the metric? 4. Is the bandwidth parameter $h$ different for each model in Tab. 1? If so, is it possible that some models obtain artificially better scores because their outlier removal is different than for the other models? Are Top P&R scores with different bandwidth parameters directly comparable? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: The authors adequately addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Reviewer 5 **R5-1. Dataset used for Table 1:** (G1-2) CIFAR-10. We apologize for not mentioning the datasets. We revised the caption of Table 1 in the revised paper. **Reason why TopP&R shows similar performance to P&R in Table 1:** (G1-3, G1-4). TopP&R shows better performance than the others when we apply it to more large-scale datasets, such as ImageNet and Baby ImageNet. **Application of outlier removal to improved P&R and D&C:** We had already compared P&R and D&C with TopP&R by applying the outlier removal method (e.g., Local Outlier Factor or Isolation Forest) to these methods in Section I.2. The experiments were conducted using the same setup as the experiments depicted in Figure 4 of the main text. By examining the results in Figures A2 and A3, it becomes evident that applying the outlier removal method to P&R and D&C still does not make them robust to noise. &nbsp; **R5-2. How TopP&R behaves with minority sets (e.g., long-tailed distributions):** If we correctly understand the reviewer's question, it pertains to whether TopP&R can effectively capture differences when there are variations in a minority set of patient data samples within a larger patient dataset, where only a small number of samples exhibit a specific disease. In our **Section I.1** of Appendix, we have already conducted experiments comparing how various metrics, including TopP&R, respond when changes occur in the minority set. This comparison was carried out to assess how different metrics react to variations in such a scenario. In **Table A11 of Section I.1**, our utilized data assumes a situation where out of a total of 10 classes, six data classes are majority classes, containing a substantial number of samples, while the remaining four minority classes have relatively small data counts, resulting in a long-tailed data distribution. In this context, we conducted experiments by further reducing the number of samples within the minority set, aiming to observe how TopP&R and other metrics respond to even the slightest changes in this distribution. Contrary to the reviewer's anticipation, the experimental results revealed that, among the metrics, TopP&R is the most sensitive in reflecting evaluation values influenced by subtle differences in the distribution. As detailed in Section C in Appendix, due to the inherent nature of TopP&R, if the minority set possesses a topologically and statistically significant data structure, these data points are deemed crucial features in approximating support. Therefore, considering all minority sets as outliers when estimating TopP&R’s support is an incorrect notion. &nbsp; **R5-3. The impact of changing the dimension of random projection on TopP&R:** (G2-2). &nbsp; **R5-4. Comparison of TopP&R scores across different bandwidth parameters:** (G3) In addition to our responses regarding $c_\alpha$ and $h$ in the general comments, $h$ being too small leads to the KDE with unstable topological features. On the other hand, $h$ being too large leads to the KDE with its topological features dissolved, so the choice of $h$ cannot be arbitrary but should be proper. **Whether certain models can achieve better performance due to different settings of the bandwidth parameter in Table 1:** (G1-2, G1-3) As stated in the response to the general comments, we have demonstrated that TopP&R provides a generally consistent ranking with metrics such as FID and KID, which are commonly used to assess performance differences among models in the field of generative modeling. &nbsp; **R5-W1. The paper is not easily understandable for individuals unfamiliar with TDA (+ more discussion between confidence band and proposed metric):** We can relate to the feedback provided by the reviewers, as this aspect was a significant concern during the paper writing process. Considering the limited page and the nature of the conference to which we submitted the paper, our approach was to structure the paper in a way that enables readers to understand the robustness of TopP&R at a high-level without getting entangled in intricate details of TDA. However, it is important to note that some aspects of TDA have been intentionally condensed in the main paper. For readers seeking a more comprehensive understanding of our work, we have provided the detailed explanation of TDA in the appendix. Furthermore, in response to the concerns raised by the reviewer, we will strive to provide more detailed explanations and visual introductions to address the aspects that require additional intuition. Our aim is to enhance the paper by offering a more comprehensive and visually informative presentation. &nbsp; **R5-W2. Novelty of this work:** Our proposed method introduces a novel theoretical approach by combining the traditional topic of consistency in statistical inference with topological data analysis. While existing literature has explored consistencies under i.i.d. settings without considering noise, and consistencies under noise frame framework, the focus on statistical inference in geometrical or topological settings with noise is relatively rare (e.g., Genovese, Christopher R., et al. (2012)). Our contribution lies in bridging this gap and providing a framework that incorporates both the consistency of topological analysis and the consideration of noise. This not only encompasses the theoretical properties of our method, called TopP&R, but also extends to identifying an appropriate noise framework and adapting statistical inference techniques for geometrical and topological data analysis. &emsp;&emsp;\- Genovese, Christopher R., et al. "Manifold estimation and singular deconvolution under Hausdorff loss." (2012): 941-963. **Real world use cases with SOTA model:** (G1-3). --- Rebuttal Comment 1.1: Comment: Thank you for providing detailed answers to my feedback. I appreciate the authors efforts and acknowledge that examining and improving the evaluation metrics generative models is a significantly important topic. I am happy to update my score accordingly.
Summary: The paper presents a novel evaluation metric called Topological Precision and Recall (TopP&R) for generative models. Existing metrics often suffer from unreliable support estimation and yield inconsistent results. In contrast, TopP&R systematically estimates supports by retaining topologically and statistically significant features with confidence. It provides robust evaluation, accurately capturing changes in samples even in the presence of outliers and non-independent and identically distributed perturbations. The metric combines ideas from Topological Data Analysis and statistical inference, offering a reliable approach to support estimation. The paper includes theoretical and experimental evidence to demonstrate the effectiveness and robustness of TopP&R, making it a valuable contribution to the evaluation of generative models. Strengths: The paper introduces a novel evaluation metric, Topological Precision and Recall (TopP&R), specifically designed for generative models. This metric addresses the limitations of existing evaluation metrics by focusing on robust support estimation, which is crucial for accurately assessing sample quality. TopP&R provides a robust evaluation that is resilient to outliers and non-independent and identically distributed perturbations. The metric systematically estimates supports by retaining topologically and statistically significant features, ensuring reliable evaluation results even in challenging scenarios. The paper demonstrates that TopP&R offers statistical consistency in its results. By incorporating concepts from Topological Data Analysis and statistical inference, the metric provides a systematic approach to distinguishing signal from noise and effectively capturing relevant features in the data. The paper provides both theoretical and experimental evidence to support the effectiveness of TopP&R. The theoretical analysis establishes the robustness and consistency guarantees of the metric, while the experimental results demonstrate its superior performance compared to existing evaluation metrics. TopP&R is a versatile evaluation metric that can be applied to various generative models. Its systematic approach to support estimation makes it suitable for evaluating the performance of models across different domains and datasets. The authors provide code implementation for TopP&R, making it readily accessible for researchers and practitioners. This facilitates the adoption and reproducibility of the metric, contributing to the transparency and advancement of the field. Overall, the paper's strengths lie in its introduction of a novel evaluation metric, its robustness and reliability in assessing sample quality, and its theoretical and experimental validation, demonstrating its effectiveness and broad applicability in the evaluation of generative models. Weaknesses: The paper mentions Algorithm 1 in line 107 but does not provide the algorithm or any further explanation in the paper. This lack of clarity leaves readers confused and makes it difficult to understand the specific steps and procedures involved. Line 139 mentions the bootstrap bandwidth, but the paper does not clearly explain how this bandwidth is computed. It is important to provide details on the methodology used for computing the bandwidth to ensure transparency and reproducibility. The paper lacks a clear discussion on the dimensionality of the problem that the proposed method can handle. It does not explain how the authors choose the dimensionality of the space for random projections or how they ensure that these projections capture relevant underlying features. Additionally, the paper does not provide theoretical results supporting the choices made in Section 3.4 regarding dimensionality and random projections. In the experimental section, the authors choose a projection dimension of 32 without providing a clear justification or explanation for this specific choice. It is important to provide reasoning or empirical evidence to support the selection of this dimension and explain its impact on the performance of the method. While the paper includes simulations to demonstrate the robustness and effectiveness of the proposed method, it lacks a thorough evaluation on real-world data where changes are not simulated. It would be valuable to see the performance of the method in real data scenarios to assess its practical applicability and validate its effectiveness in capturing genuine changes. Addressing these weaknesses would strengthen the paper by providing clarity, offering detailed explanations for methodological choices, and conducting comprehensive evaluations on real-world data. Technical Quality: 3 good Clarity: 3 good Questions for Authors: No additional questions in addition to the weaknesses listed above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Reviewer 4 **R4-W1, W2. Explanation of line 107 and 139 (Algorithm 1):** Added. We apologize for missing the appropriate reference link to “Appendix H.2” for the algorithm. Unfortunately, this happened while we were breaking the manuscript into two (main and supplementary) documents for the submission. &nbsp; **R4-W3, W4 How the dimension of random projection was chosen:** (G2-2). &nbsp; **R4-W5. Evaluation on real-world data:** (G1). --- Rebuttal Comment 1.1: Comment: After reading all the rebuttals, I have updated my score. There are a lot of choices on parameters made in the experimental setup without proper guidance which leaves practitioners wondering on the appropriate choice of such parameters. The authors should justify such arbitrary choices more rigorously. --- Reply to Comment 1.1.1: Comment: **There seems to be a misunderstanding regarding the hyper-parameter. In our algorithm, hyper-parameters are not arbitrarily set.** As mentioned in the official comment G3, the confidence level $\alpha$ is not a parameter for tuning, and furthermore, the kernel bandwidth $h$ that we employed in all of our experiments is automatically approximated through Algorithm 2 (Section H.4). Furthermore, **all other P&R variants employ a hyper-parameter denoted as $k$ (in $k$-means clustering or $k$-NN)**, which plays a similar role to $h$ of our algorithm. Moreover, existing metrics (except D&C) neither provide criteria nor algorithms for determining the appropriate $k$, often arbitrarily set, utilizing values of $k$ that have empirically shown some effectiveness on certain datasets. On the other hand, TopP&R is a more stable approach compared to conventional algorithms, as it considers meaningful topological and statistical characteristics of the data based on the confidence band (algorithm 1). This unique property allows our metric to provide robust and reliable evaluation criteria.
Summary: The paper introduces TopP&R, a comprehensive evaluation metric for generative models that enhances the accuracy of sample quality assessment in comparison to existing metrics. TopP&R incorporates topological data analysis and statistical inference, effectively estimating supports through the application of Kernel Density Estimator (KDE) under topological conditions. This metric demonstrates remarkable robustness towards outliers, successfully detects changes in distributions, and provides consistency guarantees and robustness for high-dimensional data with minimal assumptions. Notably, TopP&R is the pioneering metric that prioritizes robust support estimation, ensuring statistical consistency even when confronted with noise. The paper presents compelling theoretical findings and experimental evidence that strongly validate the effectiveness of TopP&R. Strengths: The paper introduces a novel evaluation metric, Topological Precision and Recall (TopP&R), for generative models. It combines topological data analysis and statistical inference to estimate support robustly and detect distributional changes accurately. The paper provides theoretical and experimental evidence to demonstrate the reliability and consistency of the TopP&R metric. It accurately assesses sample quality, even in the presence of outliers, non-independent and identically distributed perturbations, and other noise conditions. The paper also presents a systematic approach to estimate support using Kernel Density Estimator (KDE) under topological conditions. The paper clearly explains the motivation, limitations of existing metrics, and the design and implementation of the TopP&R metric. It includes visualizations and examples to enhance clarity and uses clear language to facilitate understanding. The paper addresses the need for more reliable evaluation metrics for generative models. The proposed TopP&R metric fills this gap by providing a robust evaluation of sample quality and ensuring statistical consistency. Its findings have implications for improving the evaluation and development of generative models. Weaknesses: Limited accessibility: The paper may be challenging for readers unfamiliar with persistent homology and topological data analysis to grasp the methodology and results. Providing additional background information and explaining concepts in more detail would improve clarity. Limited experimentation: The experiments are confined to a few datasets and models. Conducting experiments on a broader range of datasets and models would demonstrate the effectiveness of TopP&R in diverse scenarios. Insufficient discussion of limitations: Although the authors mention some limitations, such as the requirement for a large number of samples to estimate the confidence band, a more comprehensive discussion of the limitations would help readers understand the scope and applicability of TopP&R. Recommendations for improvement: Critique existing metrics: Highlight limitations of commonly used metrics, such as Inception Score and Fréchet Inception Distance, due to unreliable support estimates from sample features. Address noise vulnerability: Explain how TopP&R overcomes vulnerability to noise by considering topologically and statistically significant features with confidence. Present experimental validation: Provide comprehensive experimental results comparing TopP&R to existing metrics, demonstrating its superior accuracy and consistency. Discuss real-world applicability: Explore practical implications and applications of TopP&R in various domains, showcasing its relevance beyond theory. Share open-source implementation: Make TopP&R's implementation publicly available, ideally on platforms like GitHub, to promote reproducibility, validation, and further advancements. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How does TopP&R handle the trade-off between precision and recall in evaluating generative models? 2. Can you provide more details on the noise framework and statistical inference used in TopP&R? 3. Have you considered comparing TopP&R to other evaluation metrics based on topological features, such as Persistence Landscape? 4. Can you provide more information on the implementation of TopP&R and its applicability to different types of generative models? 5. How does the choice of threshold for retaining topologically and statistically significant features affect the performance of TopP&R? 6. Have you explored other techniques, like Gaussian mixture models or non-parametric methods, for estimating the density function in TopP&R? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Based on my understanding, the limitations of the TopP&R method mentioned in this paper can be summarized as follows: 1. The method requires a large number of samples to estimate the confidence band accurately, which can be a challenge for datasets with limited samples. 2. The reliance on the choice of kernel function and bandwidth can affect the performance of TopP&R, and the need for a theoretical guarantee on the choice of bandwidth can be difficult to obtain in practice. 3. The method assumes a smooth density function and independence between samples, which may not hold for all types of data, and the reliance on topological features may not capture all aspects of the data distribution. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Reviewer 3 **R3-1. Trade-off between precision and recall:** To quickly check whether TopP&R exhibits trade-off between fidelity and diversity as in improved P&R, we used $\mathcal{X}\sim{\mathcal{N}(0,1)}$ and $\mathcal{Y}\sim{\mathcal{N}(0.6, \sigma^2)}$ with a sample size of 10k and 32 dimensions. We compared the metrics while gradually increasing $\sigma$ from 0.7 to 1.3. The results of the experiment are shown in the table below. | $\sigma$ | 0.7 | 0.85 | 1.0 | 1.15 | 1.3 | |----|------|------|------|------|------| | Precision | 0.85 | 0.61 | 0.36 | 0.11 | 0.04 | | Recall | 0.0 | 0.05 | 0.22 | 0.69 | 0.89 | | TopP | 0.93 | 0.68 | 0.28 | 0.04 | 0.01 | | TopR | 0.0 | 0.01 | 0.16 | 0.84 | 0.99 | &nbsp; **R3-2, W4. Provide more details on the noise framework and statistical inference:** Added. Noise refers to samples with low statistical probability values, not contributing significantly to the overall distribution. We discussed two non-IID noise types in Section 5.1.3 and 5.2.2: scatter noise and swap noise. Given the scatter noise is derived by adding noise sampled from a uniform distribution across all data dimensions, its influence remains limited due to its low probabilities. Consequently, evaluation metrics are expected to remain relatively stable even when the level of scatter noise increases. On the other hand, swap noise involves randomly selecting samples from two sets and exchanging their classes. At first, only a few swaps may not form a meaningful distribution, as the swapped samples don’t necessarily represent the distribution of the opposite set. However, as the level of swap noise increases, it can lead to the formation of significant distributions due to the heightened probability of class swaps. **How TopP&R overcomes vulnerability to noise:** The significant data structures possess meaningful probability values, while the opposite exhibits lower values. In this context, TopP&R approximate robust data support identifying significant data structures by the bootstrap band $c_\alpha$. In our paper, we have provided a detailed explanation of the noise framework we employed and described how our algorithm should effectively respond to such noise. &nbsp; **R3-3. Metrics utilizing topological features:** To the best of our knowledge, our paper compares all the metrics utilizing topological features (e.g., GCA and MTD). We did not include a separate comparison with previous metrics such as Geometry Score (Khrulkov et al. 2018), as they are known to exhibit inferior performance compared to the modern topological feature-based metrics like MTD, and are also notoriously slow in computation, making them impractical for real-world usage. It is possible to define metrics using methods like the persistence landscape. For instance, one could construct landscapes using the filtration method for both real and fake datasets, and then define a new metric using statistical tests such as the t-test on the real and fake features. However, these methods utilizing persistence landscapes focus only on topological structures and forget distributional features not captured by persistence landscapes. As a result, **they may not effectively capture geometric transformations like translation in distributions**. Due to these characteristics, it is challenging to consider such methods particularly suitable as metrics for comparing generative models. &nbsp; **R3-4, W6, W7. Applicability to different types of generative models:** (G1-2, G1-3) TopP&R can be applied to all fields where generative models are used. For example, TopP&R can evaluate models that produce sound signals using appropriate networks to obtain features. Moreover, when comparing distribution similarity between various numeric data, TopP&R can be directly used even without an embedding network, treating each observation value as a feature. **Details of implementation, Code release:** (Table B2, Introduction) Please note that we have already provided the link to our code at the end of Section 1. Our metric primarily utilizes the Numpy, sklearn, and scipy libraries, and we utilize the linear layer provided by Pytorch. The metric computation is achievable through CPU operations. In terms of computation speed, our metric requires approximately 1 min 58 seconds of wall clock time to compare a real and fake dataset with a sample size of 10k. This indicates that our approach offers an appropriate calculation speed for effectively comparing generative models. Moreover, our algorithm is very user-friendly, as conducting experiments using our method only requires a simple one-line of code. &nbsp; **R3-5, L2. Choice of threshold:** (G3). &nbsp; **R3-6. Other techniques for estimating density:** (G4). &nbsp; **R3-W1. Limited experimentation:** (G1). **Addressing accessibility:** Given the constraints of limited page space and the conference’s nature, we designed the paper to allow readers to grasp the robustness of TopP&R at a high-level without becoming entangled in intricate TDA details. However, we want to emphasize that certain aspects of TDA were intentionally condensed in the main paper. For readers seeking a deeper understanding, we have provided a comprehensive TDA explanation in the appendix. We will make diligent efforts to furnish more detailed explanations to address areas that may require further clarity. &nbsp; **R3-W2, L1. Sample size required for estimating confidence band:** (G1-1, Table B2) TopP&R uses 10k real and fake samples, identical to all other metrics, and does not require a larger sample size to demonstrate its performance. In addition, TopP&R is efficient to compute (Table B2). &nbsp; **R3-W3. Critique existing metrics:** We have revised the manuscript to better emphasize the aspects suggested by the reviewer in our experimental results. &nbsp; **R3-W5. Experimental validation (showing TopP&R’s accuracy and consistency):** (G1, G2-1).
Summary: The paper discusses robust estimation of precision and recall in high dimensional and potentially noisy data. The approach relies on using kernel density estimation to approximate the underlying distributions and using a bootstrap estimation of confidence interval to ignore support with low probability mass. The authors show the effectiveness of this approach extensively using simulated and real data. Strengths: The paper is very well written and discusses the motivation and approach clearly. The problem addressed in the paper is very relevant as robust estimation of performance metrics is crucial for better comparing different approaches. The paper presents several interesting simulated and real examples to elaborate the method. Weaknesses: Although the estimation of confidence interval to remove 'noisy' samples is interesting, it would be great to see a discussion on estimating this uncertainty at individual points separately than working with the worst uncertainty, i.e., the inf norm over the kernel density. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Figure 1b: In the persistence plot, what does the shaded area imply? Can more blue points be added from the example on the right showing where in the persistence plot they land? Should the birth in this case always be at zero? - Line 128: while c_X and c_Y are referred to as bootstrap bandwidth, is there a link between them and h_n and h_m, i.e., bandwidth of the kernel density estimate? - Line 261: what does "swapping the position between Xi and Yj" imply? - Line 275: sensitiveness or sensitivity? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Limitation of the method has not been discussed in details. More discussion on the computational complexity will be helpful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Reviewer 2 **R2-1. Shaded area in Figure 1b:** For the sake of clarity in our explanation, it would be beneficial to provide a concise description of our Persistence Diagram (PD). At first glance, our PD might appear to be a swapping of the x-label and y-label, a confusion commonly associated with the PD derived from the kernel density estimator (KDE). **This confusion stems from the fact that the PD for KDE is generated through a 'super-level' filtration, distinct from the more typical 'sub-level' filtration used for most PDs, such as a Vietoris-Rips PD**. This distinction arises because KDE assigns high values to data points, making a 'decreasing' filtration value more appropriate. As a result, in this context, homological features are born at high values and die at low values, leading to a birth > death relationship. In particular, the PD for KDE almost never has homological features with birth = 0, while the PD for Vietoris-Rips has a lot of 0-dimensional features with birth = 0. To ensure comparability with the visualization of the PD from sub-level filtration, it has become customary to position death on the x-axis and birth on the y-axis. The shaded area corresponds to the orange area and the light green area in Figure A1. The orange area is where homological features have their birth < $c_\alpha$ and death < $c_\alpha$, and they usually correspond to outliers. The light green area is where homological features have their birth > $c_\alpha$ and death > $c_\alpha$, and they usually correspond to homological insignificant features lying within the estimated support. For either case, the shaded area corresponds to homological features that are statistically insignificant, and can be either (1) from noise, or (2) from signal but not eminent enough compared to the sample size. See Section C for a more detailed discussion. &nbsp; **R2-2. link between $c$ and $h$:** As in Algorithm 1 in **Section H.2**, the kernel bandwidth $h$ performs a function akin to the parameter $k$ employed in P&R and D&C. Notably, the parameter $k$ can influence the inherent breadth of the support, thereby affecting its scope. Employing a KDE that incorporates this parameter, the probability values of data features are measured. Subsequently, during the support estimation process, the probability values of features are used to approximate the bootstrap bandwidth ($c_X$ or $c_Y$). Thus, there exists a link between the kernel bandwidth $h$ and the bootstrap bandwidth, as they are intertwined in the approximation process. &nbsp; **R2-3. "swapping the position between $x_i$ and $y_j$":** This means the exchange of randomly selected datapoints $x_i\in\mathcal{X}$ and $y_j\in\mathcal{Y}$, such that each is transferred to the dataset to which other belongs (i.e., $x_i\in\mathcal{Y}$ and $y_j\in\mathcal{X}$). &nbsp; **R2-4. Sensitiveness or sensitivity?:** Thank you for your comment and we have revised that part. &nbsp; **R2-L1. Computation cost:** (Table B2) We have relatively similar computational complexity. We analyze the computational complexity and time consumption in Table B2 in the pdf we provide at this rebuttal. &nbsp; **R2-W1. Discussion on estimating this uncertainty at individual points separately:** Great point. As the reviewer pointed out, localizing the uncertainty at individual data points separately is an interesting direction of work, and it will be more sample efficient and disregard fewer data points. However, considering that we are preserving the topological signals, such a direction of localizing the uncertainty is almost hardly possible given the state of the art. For example, it is possible to localize the uncertainty of KDE $\hat{p}$, since the functional variability is local; to control the uncertainty of $\hat{p}$ at some point $x$, (roughly saying) we just need to analyze the function value at that point $\hat{p}(x)$. However, to localize the uncertainty of the homological feature lying at some point $x$, we need to analyze all the points that the homological feature lies on; hence even if we only want to control the uncertainty at a local point, it necessitates controlling the uncertainty on the global structure. This makes localizing the uncertainty for topological features difficult given the tools in topology and statistics we currently have. We have incorporated this discussion into the limitations section.
Rebuttal 1: Rebuttal: We deeply appreciate your feedback. Your insights greatly helped us to refine our presentation and better illuminate the key advantages of our metric. Your input has also greatly enhanced our discussion about the metric's limitations, which not only clarifies our current work but also highlights promising future directions. **Reviewers can find the additional experiments we conducted in the attached PDF file: Table B1\~5 and Figure B1\~5.** &nbsp; **G1. Detailed info. of Table 1 & Scalability to large-scale data with more recent generative models (ADM, StyleGAN-XL)** **G1-1.** All the evaluation in this paper was conducted using **10k real and 10k fake samples**. **G1-2. Model ranking experiment (Table 1):** The models were trained on **CIFAR-10**. Here, our aim was to highlight the consistency of TopP&R across different embeddings and its congruence with widely accepted metrics in the field, FID. Based on the reviewer's recommendation, we additionally included **KID (Table B1)**. A foundational aspect of an evaluation metric is its capacity for stability and consistency. Ideally, the ranking it provides for different models should remain largely stable, irrespective of the embedding network used. Our results show that TopP&R consistently outperforms in terms of stability, as evidenced by the MHD values across different embedding networks (TopP&R: 1.33, P&R: 2.66, D&C: 3.0, MTD: 3.33). Furthermore, as generative models have developed with an emphasis on aligning with FID's (or KID’s), we posit that the rankings determined by FID (or KID) can, to some extent, reflect the true performance hierarchy among models, even if it doesn't capture it perfectly. Our results show that TopP&R displays the strongest alignment with FID and KID, which resonates with the historical trajectory of model development. **G1-3. Ranking SOTA models (Table B3):** We evaluated generative models trained on **ImageNet**, including **ADM and StyleGAN-XL**. The results consistently show that TopP&R maintains stable rankings across various embeddings and aligns most closely with FID and **KID**. **G1-4. Mode dropping on Baby ImageNet (Figure B1):** In contrast to the CIFAR-10 outcomes where TopP&R produced results comparable to other metrics, **with the large-scale dataset**, both the sequential and simultaneous mode dropping scenarios highlight **TopP&R's precise assessment, which is in line with our observations from the toy experiments**. Conversely, P&R had difficulties in accurately evaluating the simultaneous mode drop. &nbsp; **G2. Random projection (RandProj)** **G2-1. The impact of RandProj (Table B4, Figure B3 and B4):** To discern if TopP&R's enhanced performance stems from RandProj, we applied RandProj to both P&R and D&C. This led to inconsistent results in model ranking (Table B4), suggesting that **the performance of TopP&R is attributed to its distinct features, rather than just the RandProj**. This trend was further observed in the "shifting the generated feature manifold (Section 5.1.1)" (Figure B3), and the "Tolerance to Non-IID perturbations (Section 5.1.3)" (Figure B4). Both experiments were conducted with and without RandProj. Across these studies, a consistent theme emerged: both P&R and D&C failed to exhibit a stable evaluation trend in relation to noise, irrespective of the use of RandProj. **G2-2. The effect of dimensionality (Figure B5):** We examine the **impact of changing the dimension of RandProj** on TopP&R. We employed the same experimental setup as the "Mode dropping experiment (Figure B1)" and observed how the evaluation trend of TopP&R changes by altering the dimension of RandProj. By Johnson-Lindenstrauss Lemma (Johnson et al. 1986), it is known that RandProj preserves the topological structure of data, given the output dimension isn't excessively reduced (see Section G.3). When we use higher dimensions than 32, our results show that the trend of TopP&R stays consistent and stable. However, using higher dimensions leads to greater computational overhead. Thus, for pragmatic computation purposes, we opted for a dimension of 32. &nbsp; **G3. Choosing parameters of TopP&R** **G3-1. Confidence level of bootstrap bandwidth ($\alpha$):** $\alpha$ is not a typical tuning hyperparameter, but it holds a statistical interpretation of the probability or degree of confidence in accommodating errors, noise, and similar factors. The most commonly chosen values include $\alpha=0.1, 0.05, 0.01$, corresponding to confidence levels of 90%, 95%, and 99% respectively. In our experiments, we consistently employed $\alpha = 0.1$. **G3-2. Kernel bandwidth ($h$):** $h$ serves to adjust the initial KDE probability values of features. $h$ being too small leads to the KDE with unstable topological features, and $h$ being too large leads to the KDE with its topological features dissolved, so $h$ should be properly chosen. Ideally, $h$ can be chosen using topologically significant features, but this incurs computational challenges. Instead, in Algorithm 2 of Section H.4 we use a heuristic and automatic approximation of $h$ that effectively adapts to various embeddings and arbitrary datasets. From various experimental results, this $h$ performs effectively in practice. &nbsp; **G4. Different methods rather than KDE** Please see our Remark 4.4. The super-level set of KDE excludes topological noise by the KDE filtration: a confidence band establishes a statistical threshold for distinguishing between signal and noise in the KDE filtration. However, if we switch from using the super-level set of KDE to other methods like k-NN or any alternative, the estimated support is not a result of KDE filtration. Consequently, the topological noise present cannot be assessed through KDE filtration, and the confidence band cannot be applied to determine signal or noise in support. Consequently, the guarantee of excluding topological noise is lost when using different methods; this is not known yet. Pdf: /pdf/34ef65e32cbdac14c9a638cc0a39860cd9d7ff0d.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper addresses the need for more reliable evaluation metrics for deep generative models. The existence of noise in real data, such as mislabeled data and adversarial examples, affects the reliability of existing metrics and may lead to false impressions of improvement when developing generative models. To overcome these issues, the paper proposes a new evaluation metric based on the idea of Topological Data Analysis (TDA) and statistical inference. The proposed metric uses persistent homology from TDA and bootstrap bandwidth with confidence level to distinguish between significant patterns (signal) and noise in the data, providing a more robust and reliable measure of generative model performance. Strengths: Given the recent substantial advancements in generative models, having a metric that is robust and reliable under various real-world noises holds significant importance. The authors present an approach that aims to overcome the limitations of previous metrics, using straightforward yet well-founded methods. Additionally, the paper introduces theoretical properties that show the metric's consistency in the presence of adversarial noises. The paper is well-written and easy to follow. Weaknesses: 1. If I understand correctly, the authors use random linear projection only for the proposed metric, which might be unfair since it is unclear whether the improvements in robustness to outliers or noises stem from the low-dimensional space or the proposed methods. A comparison with previous methods operating in the same low-dimensional space would be more comprehensive. 2. The proposed metric's algorithm relies on multiple iterations for KDE filtration, raising questions about its practicality. It would be beneficial to include a comparison of the computational cost and overall runtime between the proposed metric and previous ones. Gaining insights into its efficiency compared to existing metrics will be valuable for both potential users and researchers. Additionally, it is essential to consider whether the metric can efficiently scale to handle a large number of datasets. However, the paper does not seem to provide details about the number of datasets used in the experiments, which leaves uncertainty about its scalability. 3. In section 5.1.2, the authors assert that precision values smaller than 1 indicate a problem. However, this lacks sufficient justification especially with a small number of samples. Additionally, the behavior of TopP for simultaneous mode dropping in Figure 3 exhibits worse instability compared to other metrics. 4. In the experimental results for mode dropping experiments with CIFAR-10, TopR does not exhibit consistent behavior with the toy experiments in the main paper (Figure 4) and sometimes performs worse than Recall and Coverage. 5. In section 5.2.3, the authors argue that the proposed metric is more sensitive to different levels of diverse noise, similar to FID. However, the results in Figure 6 do not appear to provide compelling evidence for this claim, except for the Gaussian noise case. Notably, in the salt and pepper noise case, TopPR, PR, and DC exhibit equal behavior, and in the black rectangle scenario, TopPR performs worse than PR. To strengthen the authors' claim about the proposed metric's sensitivity to diverse noise levels, additional explanation and experiments are necessary. 6. The sole reliance on FID as the ground truth ranking for models in section 5.2.4 might be insufficient and unconvincing. It is not clear how much relying on FID as GT is reliable. Including more widely used metrics, such as KID or human evaluations, would enhance the credibility of the result. However, even with the FID, the results do not seem to show the clear superiority of the proposed metric. Additionally, the results with HD across different embeddings also raise questions about the effect of the random projection layer, as other metrics do not utilize such a layer. 7. Some statements in the paper are unfounded. For instance, the rationale behind the ideal metric returning zero (line 263) is not clear. Moreover, the assumption in line 278 ("ground truth diversity should linearly decrease") is not correct because the supp(Q) remains the same with simultaneous mode dropping. In that sense, TopR works the worst of all. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. This paper employs the median of pairwise k-nearest distances between samples to estimate fixed-width kernels for KDE. I am curious whether this method is as effective as using sample-adaptive varying kernels, especially for datasets with multi-modal or long-tailed distributions. Adaptive kernels that adjust to local data characteristics might offer advantages in capturing complex and diverse patterns. 2. Concerning the experiments presented in Table 1, which dataset was used for these experiments? Can the proposed metric demonstrate effectiveness when applied to large-scale datasets, such as ImageNet? 3. There are some recommended experiments that are missing in the paper but would strengthen the effectiveness of the proposed metric: - Evaluating metrics on Diffusion models such as ADM or EDM, which are the leading and prominent advancements in generative models, would be beneficial to demonstrate the practicality of the proposed metrics. - Experiment for being robust to outliers in real-world dataset as in D&C paper. - Ablation study regarding the hyperparameters alpha and h for real-world datasets. 4. To enhance the readability of the paper, the captions of figures should be more detailed. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Limitations are discussed in the supplement. Potential negative social impact is unlikely for a paper comparing distributions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Reviewer 1 **R1-1. Adaptive kernels.** (G4) &nbsp; **R1-2, W2.** **The dataset utilized in Table 1:** (G1-2) CIFAR-10. In the revised version, we have mentioned this in the caption. **Effectiveness on a large-scale dataset:** (G1) Yes, it scales well. For all the experiments, we used 10k real and 10k fake samples, a standard choice for most of the evaluation metrics. We have added this detail to our paper. **Computational complexity:** (Table B2) TopP&R takes approximately 1 min 58 seconds of wall clock time for a single measurement on 10k samples, showing an efficient computational speed for model comparison. We have added this to our supplementary material. &nbsp; **R1-3.** **The outlier experiment similar to D&C:** (Section 5.2.2 and Figure 5) Please note that it has already been conducted in a completely identical manner, using outliers and inliers from the FFHQ dataset. **Comparison of state-of-the-art generative model** (G1-3) **Ablation study concerning hyper-parameters:** (G3) &nbsp; **R1-4. Improve captions:** Thank you for bringing this to our attention. In our revised paper, we have now updated the captions. &nbsp; **R1-W1, W6. Add KID:** (G1-2, G1-3), **Impact of random projection:** (G2) &nbsp; **R1-W3. Justification to precision values smaller than 1 indicate a problem:** Consistent with our results, Naeem et al. (2020) noted that precision does not approach 1, even when comparing two sets of samples from the same distribution, **irrespective of the sample size**. This makes determining a meaningful precision value difficult when assessing actual generative models. While TopP may exhibit cases where it does not converge to 1, it still shows a consistent evaluation trend, and it is difficult to argue that TopP values less than 1 significantly differ from the values that should converge. The key aspect of this experiment is the sensitivity of TopR to small changes in the mode. As the reviewer pointed out, even if TopP shows minor variations in this experiment, we believe that in many of our other experiments, TopP&R demonstrated more advantages than drawbacks. &nbsp; **R1-W4. Mode dropping experiments on CIFAR-10 are not strong:** (G1-4) We believe that the reviewer is referring to Figure 3 (not Figure 4). We agree with the reviewer that, compared to the results of Figure 3, it is hard to say that TopR is clearly better than the others in CIFAR-10 experiments. To further investigate the differences between TopP&R and the existing metrics, we conducted experiments using a large-scale dataset, baby imagenet, which has higher image resolution and diversity than CIFAR-10. In this large-scale dataset, TopP&R and the other metrics exhibit similar trends as seen in the toy data experiment–TopP&R shows sensitivity to distribution changes and exhibits precise responses. &nbsp; **R1-W5. The authors argue that the proposed metric is more sensitive to different levels of diverse noise, similar to FID:** We wish to clarify that we did **not** assert that TopP&R is "more" sensitive to varying levels of diverse noise compared to all other metrics. Our comparison was limited to MTD, a topology-based method like ours, but one that exhibited inconsistent reactions to the distortions. Other than that, as it can be seen in Section 5.2.3, we simply claimed that TopP&R effectively captures the various levels of distortion introduced to the images. Our objective was to demonstrate that TopP&R offers a consistent evaluation across different noise levels. We did not intend to suggest that our metric shows the best response. &nbsp; **R1-W7. The rationale behind the ideal metric returning zero (line 263):** As the reviewer pointed out, it is indeed appropriate to modify the statement in line 263 to “has low values” (not zero). **Assumption in line 278 ("ground truth diversity should linearly decrease") is not correct:** It is correct. The $supp(Q)$ shrinks as the number of samples of each mode simultaneously decreases. This is because the samples are from the Gaussian distribution of high dimensional space, where they are sparsely distributed. Hence the deletion of the samples actually results in shrinking the support, which leads to a linear decrease in ground truth diversity. This experiment aligns with Naeem et al.’s (2020) work, aiming to achieve an ideal linear response to simultaneous mode drop. The experiment in Section 5.2.1 starts with identical $supp(P)$ and $supp(Q)$ and progressively reduces samples of nine modes within $supp(Q)$ in a linear fashion until these nine modes vanish, while replenishing the first mode with the decreased number of samples. We would like to clarify that there might have been a misunderstanding regarding this aspect. Although the total sample count within $supp(Q)$ remains unchanged, the distribution itself experiences the linear disappearance of nine modes, ultimately leaving only one mode. Thus, in terms of diversity, the reduction indeed follows a linear trend. Therefore, it is crucial for the metric to capture the linear reduction in diversity within $supp(Q)$. The results in G1-4 of general comments further confirm that TopP&R responds most effectively to such linear trends. --- Rebuttal Comment 1.1: Comment: R1-W7: Do you mean that supp(Q) changes depending on the number of samples? Is it in accordance with the definition of supp() in line 74? I believe the support of a probability distribution Q should depend on Q itself, not the observations. In the experiment, we precisely know the probability distributions, and their supports are not changed. --- Reply to Comment 1.1.1: Comment: We are sorry for the confusion, and as the reviewer pointed out, the ground truth support $supp(Q)$ remains unchanged regardless of the samples, provided that we use the definition supp(Q) as it is. However, what we meant by "the $supp(Q)$ shrinks" indeed contains subtle implications that were not possible to be precisely explained in the previous rebuttal due to the word limit. So we will explain here: When we think of what should be the "ideal ground truth diversity", for most of the scenarios it should be the population recall, i.e., $recall_{P}(Q)=P(supp(Q))$. This is because we usually think of the ideal scenario as the asymptotic case, i.e., $n, m \to \infty$, and then the data approximately give the full information as the distribution. For this case, we have already shown that TopP and TopR are consistent with precision and recall, respectively, in Proposition 4.1 and Theorem 4.2. However, the scenario in 5.2.1 is a little different: for this case, technically speaking, $supp(P)=supp(Q)=\mathbb{R}^{d}$, so diversity would be always $1$, but this would not be the desired result. In detail, although the total sample size is the same, the sample sizes of nine modes are simultaneously decreasing. So for the data belonging to these 9 modes, it is equivalent to saying that $m \to 0$. When we have a small sample, and when the support is high dimensional, it is impossible to recover the support from the sample. This is because filling the high dimensional support with the data is impossible. Hence, not only TopP&R but also with any estimator of support, it is not possible to recover the full support $supp(Q)$. With the sample size $m$, what we can recover at most is the partial support $\{q>C_{m}\}$, where $q$ is the density of $Q$ and $C_{m}$ is a sequence that decreases as $m$ increases. Hence, as the sample sizes of nine modes are decreasing, "the partial support" $\{q>C_{m}\}$, in the sense of the ideal support that we can recover with the given number of data, shrinks. This is what we precisely meant by "the $supp(Q)$ shrinks". And then the "ideal ground truth diversity", in the sense that any good estimator of the diversity should approach, also correspondingly decreases. This “ideal ground truth diversity” also coincides with the recent trend in generative models. As mentioned by Han et al. (2022), recent generative models aim to adequately generate a sufficient number of diverse samples within each mode of the real data distribution. In this regard, an evaluation model that fails to accurately capture the reduced sample size of mode does not align with current trends in generative model research. Developing a metric that accurately captures this tendency is very important. The aforementioned “ideal ground truth diversity” aligns well with the recent research direction, and so is TopP&R. TopP&R becomes an indispensable tool for evaluating the ability of newly developed generative models to generate samples diversely and accurately under statistical confidence. We will clearly elucidate how the ideal ground truth diversity should behave and the connection to the recent research trend into our paper. &emsp; - Han, Jiyeon, et al. "Rarity score: A new metric to evaluate the uncommonness of synthesized images." arXiv preprint arXiv:2206.08549 (2022).
null
null
null
null
null
null
Effective Targeted Attacks for Adversarial Self-Supervised Learning
Accept (poster)
Summary: This paper highlights that untargeted attacks for adversarial self-supervised learning result in poor downstream robustness. Instead, they suggest utilizing targeted attacks and propose a scoring method for selecting the target samples. They show that this approach results in significant robustness improvements over the untargeted attacks. Strengths: 1. The paper highlights a major issue with adversarial self-supervised learning for non-contrastive models, that the untargeted attacks typically used fail to deliver good downstream robustness. 2. The proposed method shows significant robustness improvement when compared to positive-only untargeted attack. 3. The paper is well-written and the definitions and argumentation are easy to follow. Weaknesses: 1. The proposed method, TARO, has two components but their individual contributions are not studied. TARO improves over the positive-only unsupervised attack by i. changing it to a targeted attack and ii. by selecting the target using a score function. However, there are indications in the prior work and in the results in this paper that the majority of the improvement comes from i. and not ii. For example, (Petrov and Kiwatkowska, 2022) also compare targeted and untargeted adversarial self supervision and show that targeted attacks across the whole batch result in more certifiably robust models than the untargeted ones. The fact that the improvements in Table 4 in this paper are significantly lower than the improvements in Table 3 is further evidence that it might be i. that drives the improvement rather than ii. Hence, the authors should consider adding an additional baseline to all their results where instead of using their score function, they select a random sample from the batch. This would also be computationally easier, as the paper reports a 5% overhead for the target selection. 2. The paper does not address the fact that the training loss and the loss used for computing the adversarial samples need not be the same. In fact, a number of prior works have this precise setting, e.g., (Alayrac et al., 2019), (Nguyen et al., 2022). Hence, it is not particularly novel that the loss used for training a positive-only model can be different from the one used for the attacks. 3. Only RoCL and ACL are compared against the novel method, but a number of other unsupervised adversarial training methods have been proposed as listed above. In order to claim that the target selection via a score function is better than them, the authors should compare with a larger range of methods. In my view, it is especially important to compare against methods that generate adversarial examples using the whole batch, rather than a single target as these methods might be stronger than the proposed TARO. Some examples of such prior work are (Ho and Vasconcelos, 2020), (Fan et al., 2021), (Petrov and Kwiatkowska, 2022) and the Adversarial-to-Adversarial and Dual Stream methods by (Jiang et al., 2020). 4. In the “Visualization of embedding space” paragraph and Fig. 3, it is actually not that clear whether the targeted attacks generate more samples on the boundary than the untargeted attacks. Visually the two plots seem quite similar. References: Jean-Baptiste Alayrac, Jonathan Uesato, Po-Sen Huang, Alhussein Fawzi, Robert Stanforth, and Pushmeet Kohli. Are labels required for improving adversarial robustness?, 2019 Chih-Hui Ho and Nuno Vasconcelos. Contrastive learning with adversarial examples, 2020 Ziyu Jiang, Tianlong Chen, Ting Chen, and Zhangyang Wang. Robust pre-training by adversarial contrastive learning, 2020 Lijie Fan, Sijia Liu, Pin-Yu Chen, Gaoyuan Zhang, and Chuang Gan. When does contrastive learning preserve adversarial robustness from pretraining to finetuning?, 2021 A. Tuan Nguyen, Ser Nam Lim, and Philip Torr. Task-agnostic robust representation learning, 2022 Aleksandar Petrov and Marta Kwiatkowska. Robustness of Unsupervised Representation Learning without Labels, 2022 Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: 1. I did not understand the “Analysis of the selected target”. What is the AT model used? How was it trained? And why was an AT model preferred to a standardly trained one? It is also not particularly clear what Fig. 2a shows. Perhaps more clarity in the writing could help. 2. The notation in the “Similarity and entropy-based target selection for targeted attack” paragraph is confusing. $p’$ and $p$ are used in the equations but $p_i$ is defined below. Is $S_\text{entropy}$ a vector or a scalar? If $p$ is a vector of logits then isn’t $S_\text{entropy}$ a vector? Typo: - The sentence starting on line 361, an unnecessary “are” and “belonging” should be “belong”. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 4 excellent Contribution: 3 good Limitations: As mentioned in Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness 1:** Individual contributions of two components (i. changing it to a targeted attack and ii. selecting the target using a score function.) are not studied. Lower improvements in Table 4 compared to Table 3 suggest that [i] might be driving the improvement, rather than [ii]. **Response:** **There seems to be a critical misunderstanding of our work.** Our approach consists of **two interrelated components**, not individual ones: [i] changing to a targeted attack and [ii] selecting the target using a score function. It's important to note that component [ii] is not separate but rather boosts the performance of component [i] (Line 245-251). * [i] is proposed based on the theoretical motivation of Theorem 3.2, which suggests that targeted attacks enlarge the scale of attacks (Line 230). Empirically, we demonstrate the effect of the first component in Table 2. * [ii] is introduced to find a more effective target rather than random instances, enhancing the robustness of the self-supervised models. As shown in Table 3, the model can achieve improved robustness due to selecting a more effective target for the targeted attack in Equation 9 (Line 227). We summarize the tables in the below. |Method|Component|Clean|PGD| |-|-|-|-| ||-|71.78|32.28| |SimSiam|[i]|73.25|42.85| ||[i]+[ii]|**74.87**|**44.71**| * This improvement also holds true in contrastive-based SSL, such as RoCL and ACL, as indicated in the following table. * Specifically, random targeted attacks boost the performance of the original RoCL and ACL. When we employ the score function to select a more effective target for the targeted attack, the model can achieve even better robustness. |Method|Attack|Target Selection|Clean|PGD| |-|-|-|-|-| |RoCL|Untargeted|-|78.14|42.89| ||Targeted|Random|79.26|43.45| |||Score function|80.06|45.37| |ACL|Untargeted|-|79.96|39.37| ||Targeted|Random|73.25|42.85| |||Score function|78.45|39.71| * The reason that the improvements in Table 4 are lower than those in Table 3 is **due to the different frameworks employed.** * Table 3, based on positive-pair only SSL frameworks like BYOL and SimSiam, contrasts with Table 4, which shows results from a contrastive learning framework like SimCLR. BYOL and SimSiam benefit more from TARO due to their reliance on positive pairs (Line 312-316). * The contrastive-based approach SimCLR diminishes TARO's effect by using both positive and negative pairs in representation learning, reducing the positive pair effect to 1/batch size in loss objectives (Line 321-323). --- **Weakness 2:** The paper does not address the fact that the training loss and the loss used for computing the adversarial samples need not be the same. **Response:** This is a critical misunderstanding of our contributions. Our primary contribution is not proposing a different attack loss compared to the training loss to gain robustness. * Instead, our novel contribution is that we first propose a targeted attack between the positive pair for adversarial self-supervised learning. This proposal is based on theoretical motivation (Theorem 3.2), where a targeted attack can generate stronger adversaries during adversarial representation learning (Table 2). * We summarize the tables in the below. |SSL|Attack Type|Clean|PGD| |-|-|-|-| |SimSiam|Untargeted|66.36|36.53| ||Targeted|77.08 **(+10.02%)**|47.58 **(+11.05%)**| * We propose to **switch from an untargeted attack to a targeted attack** between the original image and the selected target image, i.e. positive-pair images, as outlined in the following equation. > Untargeted attack $\delta^{t+1}=\Pi_{B(\epsilon)}(\delta^t+\alpha sign(\nabla_{\delta}L(t1(x_i), t2(x_i)))$ \ > Targeted attack $\delta^{t+1}=\Pi_{B(\epsilon)}(\delta^t+\alpha sign(\nabla_{\delta}-L(t1(x_i), t2(x_j)))$ --- **Weakness 3:** Limited comparisons. **Response:** Thank you for the related work suggestions; they will be included in our revision. Since our method suits any adversarial SSL framework using positive-pair images, we applied the TARO method to recent work (DynACL) and confirmed TARO's effectiveness. * Please note that since [1] did not provide any official code or checkpoint, we were unable to evaluate this approach against our evaluations. However, we will discuss it in the related work section. |Method|Clean|PGD| |-|-|-| |DynACL|78.56|46.10| |DynACL+TARO|**78.83**|**46.79**| --- **Weakness 4:** It is not that clear whether the targeted attacks generate more samples on the boundary than the untargeted attacks. **Response:** (Figure also can be found in the PDF file) As depicted in Figure 3 (a), some of the dark blue square dots are positioned near the yellow and orange clusters, but approximately **half of the instances are found within the blue cluster.** These particular instances may not effectively contribute to improving robustness. However, in contrast, Figure 3 (b) shows that **most of the dark blue triangles are aligned along the outline** of the blue cluster. --- **Q1. About “Analysis of the selected target”.** \ **R.** We train the AT model, a supervised adversarial training model, with [1]. To identify more attackable classes in robust representation, we use the AT model instead of a standard one. In Figure 2(a), we find that the "ship" class is most susceptible to "plane" image attacks. Our analysis in Figure 2(b) shows that our target selection algorithm often chooses "ship" images as targets for targeted attacks, in line with our aim to create stronger attack images. We will revise the paragraph more clearly. [1] Towards Deep Learning Models Resistant to Adversarial Attacks **Q2. About Equation $S_{entropy}$.** \ **R.** Sorry for the confusion; the symbol $pi$ is a typo, and it should be $p$, where $p=h\circ g\circ f(x)$. Here, p is a vector of logits, and we calculate the entropy which is scalar by summing up the values along the corresponding dimension as follows: $S_{\text{entropy}}(x,x') = \sum \frac{p'}{\tau} \log(\frac{p'}{\tau})$ --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response and for providing the additional results. They do indeed show that the target selection increases the resulting robustness a bit over using random selection. I'd recommend adding these results to the manuscript. I have one outstanding question about __W1__ though: You say _Specifically, random targeted attacks boost the performance of the original RoCL and ACL. When we employ the score function to select a more effective target for the targeted attack, the model can achieve even better robustness._ but that doesn't seem to be true for ACL. The robust accuracy is lower there for the score function compared to the random selection. How do you reconcile this? --- Reply to Comment 1.1.1: Title: Apologize for the typo Comment: Thank you for your comments. We apologize for the confusion. There appears to be a typo regarding the performance of 73.25/42.85. This performance actually corresponds to the random selection performance of SimSiam as referenced in the first table (second row). The correct performance is **78.31/39.74**. While the performance gain from the contrastive-based approach is not as significant as the positive-pair SSL framework, it's worth noting that the contrastive-based approach, SimCLR, mitigates TARO's effect by utilizing both positive and negative pairs in representation learning. This reduces the positive pair effect to 1/batch size in loss objectives (see Line 321-323). We appreciate your feedback. To provide clarity, we will revise the table in our initial response as soon as you find our corrections. Moreover, we will certainly adding these results in the manuscripts. Best regards, \ Author
Summary: This paper proposes a new adversarial attack on self-supervised learning methods which is called TARO. The method performs an attack "on the positive-pair that perturb the given instance toward the most confusing yet similar latent space, based on entropy and similarity of the latent vectors." Authors show that this attack can be used to improve adversarial robustness of the underlying semi-supervised learning framework. Strengths: - The experimental evaluation shows that TARO can be used to improve adversarial robustness of positive-pair-only self-supervised learning approaches (SimSiam and BYOL). - The paper is mostly well-written and comprehensible. It also gives a useful overview of the necessary backgrounds. Weaknesses: - Mathematical notation can be improved at some points: 1. In Section 3.1 the authors state that the dataset $\mathcal{D} = \\{X, Y\\}$ is a set of the training data and the corresponding labels. This does not make sense: $\mathcal{D}$ has to be a tuple and X and Y also need to be ordered sets. 2. For me it is not clear what is meant by ${z_{pos,neg}}$ in Equation 2. The used notation suggests that it is a set containing a single element. However, that does not make sense as the sum in the denominator should go over all positive and negative samples. 3. In Equation 3 it is not specified at all over which entries the sum is calculated. - The number of evaluated attacks is too small. The authors only compare against PGD and Autoattack. It might be interesting to compare against other attacks, e.g. Carlini-Wagner or black-box attacks. - Theorem 3.2 states that "...adversarial perturbations are *likely* to be larger than...". However, the next sentence states that this is always the case. I suggest that the authors reformulate that sentence. - Figure 1 is difficult to understand. I had to read some more sections first to understand that the positive-pair only attack is proposed in the paper. Maybe, the term should be mentioned earlier (perhaps even in the abstract.) Minor comment: - Using verbatim in formulars is quite uncommon. I suggest the authors use \text{}, but this is a matter of taste. - "positive-pair only" vs. "positive-pair-only" Overall, I see merit in this submission and willing to recommend acceptance after the weaknesses mentioned in my review were addressed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How improves adversarial training using TARO robustness against other attacks, e.g. Carlini-Wagner or black-box attacks? - Are there any other SSL methods besides BYOL and SimSiam which could benefit from TARO? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are not discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive comments on our work, that our work **highlights the improvement of robustness** in the positive-pair-only self-supervised learning approach, **acknowledges the well-written and comprehensive** paper, and **has merits**. ----- In the following response, we have done our best to resolve all the concerns that you raised in the weakness section. Please find our detailed response below, and if there are further concerns about our work, do not hesitate to share your comments. We would be delighted to address any additional questions or concerns you may have.\ Thank you again for your valuable comments and thoughtful review. --- **Weakness 1.** Mathematical notation should be revised. * We will revise mathematical notation as follows. Furthermore, we will also go over other parts to improve the clarity of the mathematical notations. 1) In Section 3.1 the authors state that the dataset D={X,Y} is a set of the training data and the corresponding labels. * We will revise the definition of dataset as follows: dataset $D=${$(x_i,y_i)$} where $x_i \in R^D$ are input images and $y_i \in R^N$ are their corresponding labels from the N classes. 2) For me it is not clear what is meant by zpos,neg in Equation 2. * It is a typo, and we apologize for the confusion. {$z_{\text{pos}}$} and {$z_{\text{neg}}$} are the sets of the latent vector \(z\) of positive-pair and negative pairs, respectively. We will revise Equation 2 as follows: $L_{nt-xent} (x,\\{ x_{pos}\\}, \\{x_{neg}\\}) := -log \frac{\sum{z_p\in\\{z_{pos}\\}} (sim(z, z_p)/\tau) } {\sum{z_p \in\\{z_{pos}\\}} (sim(z, z_p)/\tau)+\sum{z_n \in \\{z_{neg}\\}} (sim(z, z_n)/\tau)}$ 3) In Equation 3 it is not specified at all over which entries the sum is calculated. * Equation 3 represents the objective of SimSiam [1], a positive-pair only SSL framework. The loss function calculates the cosine similarity between the positive pairs, which are originally the same instance but augmented differently. For clarification, we will revise the Equation as follows: $L_{ss}(x, x_{pos})$=$L_{t_1(x), t_2(x)}=-\frac{1}{2}\frac{p_1}{||p_1||_2}\frac{z_2}{||z_2||_2}-\frac{1}{2}\frac{p_2}{||p_2||_2}\frac{z_1}{||z_1||_2}$ where $p_i =h \circ z_i$ and $z_i=g\circ f(t_i(x))$. [1] Exploring simple siamese representation learning., CVPR 2021 --- **Weakness 2./ Q1.** Evaluation against other attacks is needed. * Thank you for your comments. Following your suggestions, we tested our approach against the Carlini-Wagner (CW) attack [1], black-box attacks, i.e., Pixle [2], and Patch-attack, i.e., PIFGSM [3], as shown in the following table. Since our approach already showed improved performance against Autoattack, which includes black-box Square attacks, our approach is able to consistently demonstrates enhanced robustness against both the CW attack and AT black-box attacks. ||PGD|CW [1]|Pixle [2]|PIFGSM [3]| |-|-|-|-|-| |RoCL|42.89|76.45|67.32|43.23| |RoCL+TARO|45.37|72.75|68.40|44.56| |SimSiam|32.28|68.14|54.56|28.31| |SimSiam+TARO|44.97 |73.87|67.22|46.37| [1] Towards Evaluating the Robustness of Neural Networks, 2017 \ [2] Pixle: a fast and effective black-box attack based on rearranging pixels, ECCV2020 \ [3] Patch-wise Attack for Fooling Deep Neural Network, 2022 --- **Weakness 3.** Theorem 3.2 should be reformulated. * Thank you for your comments. We will reformulate Theorem 3.2 as follows to make the theorem more clear. > Theorem 3.2 **[Perturbation range of targeted attack]** Given a model trained under the $L_{\text{targeted-attack}}$ loss, the adversarial perturbations $\delta_{\text{targeted-attack}}$ are larger than the adversarial perturbations $\delta_{\text{ss}}$ from a model trained under the $L_{ss}$. Formally, $|\delta_{\text{targeted-attack}}|$ $>\|\delta_{\text{ss}}\|_\infty$. ---- **Weakness 4.** Figure 1 is difficult to understand and positive-pair only attacks should be explained earlier. * We will revise the abstract to include mention of the positive-pair-only attack for better readability. Furthermore, we have also revised Figure 1, providing clearer descriptions for the captions in the PDF. ---- **Minor comment:** We will revise verbatim in formulas and use \text{}. Moreover, we will use the consistent term “positive-pair only” in all manuscripts for better consensus of the term. Thanks for the comments! --- **Question.** Are there any other SSL methods besides BYOL and SimSiam which could benefit from TARO? * Our TARO can leverage any kind of SSL method that employs positive pairs to learn visual representation. Therefore, methods like BYOL and SimSiam, which solely rely on positive-pair-only frameworks, show larger benefits from TARO (Table 3). Furthermore, the contrastive-based approach SimCLR, which also employs positive pairs in representation learning along with negative pairs, also benefits from TARO as shown in Table 4. * Based on your question, we further examine the applicability of TARO by applying it to adversarial self-supervised learning in an additional self-supervised learning framework: Barlow Twins. Barlow Twins is a framework that learns visual representation using positive pairs and redundancy regularization. * Since there are no adversarial baselines based on Barlow Twins [2], these results may underfit due to a simple combination of [1] and Barlow Twins [2]. However, when we apply TARO to the adversarial Barlow Twins using the same hyper-parameters and the same settings, **it shows much better clean performance and robustness, as demonstrated in the following table**. This further demonstrates the applicability of our TARO on positive-pair of SSL frameworks. |Method|Clean|AutoAttack Robustness| |-|-|-| |[1]+Barlow Twins [2] | 61.94|11.71| |[1]+Barlow Twins [2] + TARO|75.37 **(+13.43)**|26.43 **(+14.72)**| [1] Kim et al., Adversarial Self-Supervised Contrastive Learning, NeurIPS 2020 \ [2] Zbontar et al., Barlow Twins: Self-Supervised Learning via Redundancy Reduction, ICML 2021 --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their rebuttal and appreciate the effort put into the additional experiments. Overall, most of my concerns have been adequately addressed which is why I increased my rating to 7 and presentation score to 3. However, in my optinion the notation used for the $L_\text{nt-xent}$ loss is still misleading as $\\{z_\text{pos}\\}$ refers to a set containing a single element $z_\text{pos}$ which is not what the authors intent to say. I urge the authors to properly define the set of positive and negative examples. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, We sincerely thank you for your feedback. Your positive comments and detailed concerns have significantly helped improve our work. In light of your feedback, we will clarify z_pos and z_neg to prevent any misunderstandings for the readers. We greatly appreciate the time and effort you've dedicated to reviewing our work, and your insights have been invaluable. Best regards, \ Author
Summary: The paper proposed the targeted attack for adversarial self-supervised learning to solve the suboptimal learning issue in positive pair-only self-supervised learning. To improve the robustness of adversarial self-supervised learning (SSL), the author leverages the targeted selection mechanism based on the score function and generate the samples by selected targets for adversarial training. The paper empirically validates that target attacks can improve the adversarial robustness of several previous adversarial SSL methods including BYORL and SimSiam. Strengths: The paper is well-organized and easy to follow. The proposed idea is clear and has good motivation. Weaknesses: For the transferable robustness part, the author mentioned that TARO can transfer robustness from one task to another. In my opinion, CIFAR100 and CIFAR10 share a similar domain and is not trivial to transfer the robustness. Although the authors evaluate on AutoAttack, which is a well-known strongest attack so far, one thing that I might be curious about is the transferability from the PGD attack (known attack) to other kinds of unknown attacks such as the Patch attack, CW_l2. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can the proposed method be applied to the transformer-based model, such as ViT? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes, the authors adequately addressed the limitation Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive comments on our work, which highlight good motivation clear idea with well-organized writing. --- In the following response, we have done our best to resolve all the concerns that you raised regarding our work. Please find our detailed response below, and if there are further concerns about our work, do not hesitate to share your comments. We would be delighted to address any additional questions or concerns you may have during the discussion phase. Thank you again for your valuable comments and thoughtful review. --- **Weakness 1.** CIFAR100 and CIFAR10 seem not trivial to transfer the robustness. * As you commented we also tested on the different domains transferring from CIFAR10 to STL10, the robustness in the following table. Consistently, our approach shows better-transferring robustness in non-trivial transfer tasks. * However, please note that transferring the robustness is one of the challenging tasks to achieve even from a similar domain from CIFAR100 to CIFAR10 [1]. Therefore, many previous works have conducted experiments on CIFAR100 to CIFAR10, and we followed the same setting as [1,2,3]. | CIFAR10 | target dataset| Clean | PGD $\ell_{\infty}$ | |--|:-------:|-----------|-----------| |RoCL|STL10|63.85|32.75| |RoCL + TARO|STL10|**67.30**|**32.90**| |SimSiam|STL10|30.22|12.66| |SimSiam + TARO |STL10|**54.45**|**33.44**| [1] Shafahi et al., Adversarially Robust Transfer Learning, ICLR 2020 \ [2] Kim et al., Adversarial Self-Supervised Contrastive Learning, NeurIPS 2020 \ [3] Fan et al., When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?, NeurIPS 2021 ---- **Weakness 2.** Attack transferability to CW or Patch attack. * Following your suggestions, we evaluated our approach against Carlini-Wagner (CW) attack [1], black-box attacks, i.e., OnePixel [2], and Patch-attack, i.e., PIFGSM [3], to demonstrate transferability of the attack types. * As shown in the following Table, our works are able to demonstrate the transferable robustness against diverse attacks in both contrastive-based TARO and positive-pair only based TARO. |Type|Method|PGD|CW [1]|OnePixel [2]|PIFGSM [3]|Average| |-|-|-|-|-|-|-| |Contrastive-based SSL|RoCL|42.89|76.45|67.32|43.23|57.47| |Contrastive-based SSL|RoCL+TARO|45.37|75.38|68.40|44.56|**58.43**| |Positive-pair only SSL|SimSiam|32.28|68.14|54.56|28.31|45.82| |Positive-pair only SSL|SimSiam+TARO|44.97|73.87|67.22|46.37|**58.11**| [1] Towards Evaluating the Robustness of Neural Networks, 2017 \ [2] One pixel attack for fooling deep neural networks, 2018 \ [3] Patch-wise Attack for Fooling Deep Neural Network, 2022 ---------- **Question. Can TARO be applied to ViT?** * Our work is a model-agnostic approach so that TARO can be applied to a transformer-based model. * We are currently running experiments on ViT with TARO. However, due to the computational costs, it will take a few days to obtain the full set of results, and we will soon update our table as soon as they are out. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response and effort. The authors' response has addressed most of my concerns. I would expect the experiment of ViT with TARO can also get better performance. Thus, I will keep my rating. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, We apologize for the delay in our response. We encountered challenges because there is no baseline research that has explored either self-supervised learning with a small dataset or adversarial training on ViT. We had to identify an approach that fits within our computational budget while thoroughly evaluating the effectiveness of our TARO. Kindly understand that due to computational constraints, our initial efforts did not involve extensive hyper-parameter tuning. To explain the settings we implemented, we adopted the self-supervised ViT approach as described in [1]. Building on [1] which is pretrained on ImageNet1K, we further trained the model for 15 epochs by integrating adversarial self-supervised training using both untargeted and targeted attacks on CIFAR10. Here's a summarized table of our results: |Model|Clean Accuracy|Robustness| |-----|--------------|---------| |ViT [1]+untargeted attack|66.63%|56.36%| |ViT [1]+TARO|74.01%|67.23%| As illustrated in the table, **TARO has also demonstrated effectiveness when applied to ViT** since TARO is a model-agnostic approach. [1] Emerging Properties in Self-Supervised Vision Transformers Best, \ Author
Summary: This paper investigates the problem of unsupervised adversarial training. It claims that previous non-contrastive, positive-only SSL frameworks suffer from ineffective learning with untargeted adversarial samples. Based on this, this paper proposes a new TARO paradigm, which conducts targeted attacks to select the most confusing but similar samples for guiding the gradients toward a desired direction. Experiments show some great results. Strengths: 1. The motivation of this paper is clear, which addresses the limitation so existing positive-only SSL frameworks. 2. The paper is well-organized and easy to read. 3. Supplementary file is provided. Weaknesses: 1. The novelty is incremental. Actually, the technical design of targeted adversarial SSL is too straightforward. Although many theorems are provided in the paper to support it, I don't think they are critical to the final model design. 2. Missing many related works. The references are out-of-data, they are all published before 2022. There are many recent papers of this topic, the authors should carefully revisit them and discuss with them. 3. The experiments are unconvincing. The baselines are old, the authors should conduct experiments with latest baselines. Moreover, the ablation study is also insufficient. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See above Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive comments on our contributions, which highlight clear motivation and address the limitations of existing positive-pair-only SSL frameworks in robustness. ---- In the following response, we have done our best to resolve all the concerns that you raised regarding our work. Please find our detailed response below, and if there are further concerns about our work, do not hesitate to share your comments. We would be delighted to address any additional questions or concerns you may have. Thank you again for your valuable comments and thoughtful review. ---- **Weakness 1.** The novelty is incremental. * We appreciate the feedback, but it appears there may be a fundamental misunderstanding of our work that requires clarification. * In Table 2 of our manuscript (also summarized in the following table), we observe that robustness in positive-pair only SSL is extremely vulnerable compared to success in adversarial contrastive-based SSL. The vulnerability in positive-pair only SSL can be attributed to the restricted range of perturbation, in contrast to that generated by contrastive-based SSL, as elucidated in Theorem 3.1. Based on this observation, we propose a simple yet effective approach wherein a targeted attack can be employed to augment the perturbation range, as outlined in Theorem 3.2. * We would like to underscore that, to the best of our knowledge, **the utilization of targeted attacks between two instances has not been previously explored** in the field of adversarial SSL. * **Our approach is founded on theoretical motivation** that suggests a targeted attack amplifies the range of attack strength in positive-pair only SSL (as detailed in Section 3, Lines 185-214). This method involves a methodical design that conducts targeted attacks between positive-pair instances to enhance the robustness of adversarial SSL. * Our novel score function proposal involves **selecting more strategic targets, rather than random instances, to increase robustness** in adversarial SSL, building on prior studies [1,2,3]. * Our empirical findings ensure the effectiveness of our score function design. As shown in the corresponding table, our score function outperforms random selection in terms of robustness, thereby substantiating the merits of our method. |Type|Method|Target selection|Clean|Robustness| |-|-|-|-|-| |Positive-pair only SSL|SimSiam|-|71.78|32.28| |Positive-pair only SSL|SimSiam+TARO|Random|73.25|42.85| |Positive-pair only SSL|SimSiam+TARO|Score function|74.87 **(+3.09)**|44.71 **(+12.43)**| * Our targeted adversarial SSL can be adapted to any kind of adversarial SSL framework including contrastive-based SSL and non-contrastive-based SSL, i.e., positive-pair-only SSL, which is especially effective in positive-pair-only SSL (Table 3). The summarized table is as shown follows: |Type|Method|Clean|Robustness| |-|-|-|-| |Contrastive-based SSL|RoCL|78.14|42.89| |Contrastive-based SSL|RoCL+TARO|80.06 **(+1.92)** |45.37 **(+2.48)**| |Positive-pair only SSL|SimSiam|71.78|32.28| |Positive-pair only SSL|SimSiam+TARO|74.87 **(+3.09)**|44.71 **(+12.43)**| [1] Mma training: Di- rect input space margin maximization through adversarial training. ICLR 2020 \ [2] Evaluating the robustness of geometry-aware instance-reweighted adversarial training. ICLR 2021 \ [3] Rethinking the entropy of instance in adversarial training., SaTML 2023 ----- **Weakness 2**. Many recent related works are missing. * Thanks to your comments, we will add recent references [4,5,6] in our paper including the publication in 2023. * Since our approach can be applied to various kinds of adversarial SSL, including the contrastive-based SSL approach, we applied TARO to DynACL [6]. As shown in the response to Weakness 3 below, experimental results show that our approach is also effective in [6]. If you have additional publications that need to be revisited in our paper, please let us know so we can include them in our comparison. ----- **Weakness 3.** Experiments are not convincing due to old baselines and insufficient ablation study. * As we mentioned in the previous response, we additionally conduct the experiment on top of the most recent work, i.e., DynACL'23 [6]. * As illustrated in the table below, TARO is also effective with DynACL, a recent variation of ACL that schedules the data augmentation strength during pretraining. Since TARO proposes to adjust the attack to be stronger than the original ACL attack, our approach is applicable and can demonstrate effectiveness with this recent variation of ACL, specifically DynACL. | | Clean | PGD| AutoAttack | |-|-|-|-| |DynACL|78.56|46.10|43.31| |DynACL+ TARO|**78.83**|**46.79**|**43.53**| * Furthermore, we conducted an ablation study on our score function to confirm the effectiveness of our selection algorithm within both the contrastive learning framework and the positive-pair only framework. Compared to random selection, our score function exhibits superior robustness in adversarial SSL, thereby underscoring the advantages of our approach. * Please let us know if any further ablation studies are needed to understand our approach. ||Target Selection|Clean|PGD| AutoAttack | |-|:-:|-|-|-| |SimSiam|-|71.78|32.28|24.41| |SimSiam+TARO|Random|73.25|42.85|34.72| |SimSiam+TARO|Score function|**74.87**|**44.71**|**36.39**| |RoCL|-|78.14|42.89|27.19| |RoCL+TARO|Random|79.26|43.45|27.24| |RoCL+TARO|Score function|**80.06**|**45.37**|**27.95**| [4] Xie et al., Adversarial examples improve image recognition, CVPR'20 \ [5] Fan et al., When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?, NeurIPS'21 \ [6] Luo and Wang et al., Rethinking the Effect of Data Augmentation in Adversarial Contrastive Learning, ICLR'23 --- Rebuttal Comment 1.1: Comment: Thanks for the authors' clarification. The authors' response has addressed most of my concerns. However, I still think the contributions of this paper are not enough to meet the standards of the Neurips conference. Therefore, I will keep my rating. --- Reply to Comment 1.1.1: Comment: Thank you for your comments and feedback. And we are glad that we resolve most of your initial concerns. However, we kindly seek clarity regarding the criteria by which our contributions did not meet the standards of the NeurIPS conference, especially since we have addressed most of the concerns you raised. Given our theoretical motivations and the results of our empirical experiments, we are confident that our work offers meaningful insights to the adversarial self-supervised learning community. Could you please elaborate on any remaining concerns about our contribution? We are willing to address them if possible. Best, \ Author
Rebuttal 1: Rebuttal: Dear Reviewers, We would like to thank you for the time and effort you've invested in reviewing our paper, and for the constructive feedback you have provided. During the initial response period, we did our best to address all the concerns you raised and to improve our paper according to your insights. We have responded to each of your individual comments. Furthermore, we have additional discussions and experimental results from various perspectives, in line with your suggestions. A brief summary of our responses is provided below for your convenience. We hope that our revisions have resolved your concerns, and kindly request you to consider reflecting these changes in your updated review scores. --- * Reviewer aDD2: Clarified our contributions and approach * Reviewer aDD2: Added experiments based on recent work (DynACL’23) * Reviewer aDD2: Conducted ablation experiments on the score function --- * Reviewer A38P: Added transfer tasks to the STL10 dataset * Reviewer A38P: Introduced additional adversarial evaluations against CW, Patch attack, and black-box attack --- * Reviewer zysz: Revised mathematical notations * Reviewer zysz: Updated Figure 1 and its descriptions (in PDF file) * Reviewer zysz: Included additional adversarial evaluations against CW, Patch attack, and black-box attack * Reviewer zysz: Experimented with different types of SSL frameworks, such as Barlow Twins, using our approach --- * Reviewer Uc5D: Conducted an ablation study on each component (i.e., targeted attack, target selection algorithm) * Reviewer Uc5D: Analyzed the ablation study of target selection algorithms in contrastive-based SSL frameworks * Reviewer Uc5D: Added comparison experiments based on recent work (DynACL’23) * Reviewer Uc5D: Added detailed description of Figure 2, 3 (in PDF file) Pdf: /pdf/0f9f39debd6452ad51a3926f38d002db1edf207b.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Foundation Model is Efficient Multimodal Multitask Model Selector
Accept (poster)
Summary: This paper proposed an efficient multi-task model selector (EMMS) to address the inapplicability in a multi-modal multi-task scenario. Specifically, the proposed method achieves a new state-of-the-art in performance and speedup through the incorporation of design elements such as the F-label, Weighted Linear Square Regression, Fast Computation by Alternating Minimization. Strengths: 1. The motivation for addressing “a unified representation to represent diverse label formats” is clearly presented and validated. And use foundation model is an impressive way. 2. The detailed derivation and experiment in this paper are comprehensive, providing strong evidence of the validity of the proposed model. Weaknesses: 1. It is recommended to include a complete demo in the code address of the paper, rather than a short python file. 2. The phrase "5.13×, 6.29×, 3.59×, 6.19×, and 5.66× speedup in wall-clock time" appears to describe the speed efficiency of the EMMS (One) method. Could you clarify why EMMS's performance varies, being faster than LogME in some instances and slower in others, as indicated in Table 1? 3. In Table 1, there seems to be some ambiguity about the effect of F-label on transferability assessment, since the perforamnce of the EMMS(one) and EMMS is identical in some rows. 4. It is recommended that the paper include a broader array of foundational models for F-label. Is CLIP currently the model with the highest effects? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: na Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: na Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the useful and helpful comments on our manuscript. We address the reviewer's concern below. **Q1:** "It is recommended to include a complete demo in the code." **A1:** Thanks for your suggestion. We have open-sourced complete code at https://github.com/anonymous123654/AnonymousEMMS. **Q2:** "The phrase "5.13×, ... in wall-clock time" appears to ... EMMS (One) method. Could you clarify why EMMS's performance varies, being faster than LogME in some instances and slower in others, as indicated in Table 1?" **A2:** It is true that we report the results (both $\tau_w$ and wall-clock time) of EMMS(one) in Table 1 for classification tasks. EMMS(one) is a variant of EMMS which replaces the F-Label with a one-hot label. Hence EMMS(one) can only be used in classification tasks. In image classification, we prefer to use EMMS(one) rather than EMMS because EMMS(one) achieves a good trade-off between effectiveness and efficiency. To see this, we list the computational complexity of LogME, EMMS(one), and EMMS in Table A. From Table A, we see that EMMS(one) have lower computational complexity than LogME. Moreover, our EMMS(one) allows for full vector computation and can be efficiently solved by existing scientific computation packages such as np.linalg.lstsq in our implementation. Nevertheless, LogME cannot be written in fully vectorized form because the optimization of model parameters of LogME are highly coupled. In addition, EMMS usually has higher computation complexity because $D_2 \gg C$. In some cases, when the number of categories $C$ and the iteration number $T$ are large, EMMS could be faster than LogME with vector computation. For example, we find that $C=397$ and $T=4.46$ on average over all models when LogME is convergent on the Sun397 dataset. It results in higher time complexity than LogME, as indicated in Table A. We further verify this by implementing LogME with $T=1$. As shown in Table B in the attached PDF, EMMS spends more time in calculating the transferability than LogME (T=1) on all datasets. However, LogME performs much worse than EMMS because it does not converge when $T=1$. **Table A.** The comparison of computational complexity between LogME, EMMS(one), and EMMS in image classification. We denote model feature $X \in R^{N \times D_1}$, F-labels $Z\in R^{N\times D_2\times K}$, and one-hot label $Y\in R^{N\times C}$ with $N \approx 10^4$, $D_1 \approx 10^3$, $D_2 \approx 10^3$, $K=3$, and $C\approx 10^2$. Moreover, $T$ denotes the iteration number of the inner loop in LogME with $T \approx 3$. | Method | LogME | EMMS(One) | EMMS | | --------------------------------- | --------------------------------- | --------------------------------- | ------------------------------------------------------------ | | **Complexity** | $3TCD_1^2 + (2T+1)NCD_1 + D_1^3 + ND_1^2$ | $CD_1^2 + NCD_1 + D_1^3 + ND_1^2$ | $ND_1^2 +2ND_1D_2 + D_1^3 + D_1^2D_2 + (K^2+K)(ND_2) + K^3 + K^2 +K\log K$ | | **Simplified Complexity** | $3TCD_1^2 + ND_1^2+ND_1C(2T+1)$ | $ND_1^2$ | $ND_1^2 + 2ND_1D_2$ | | **Vector Compuration** | ✗ | ✔ | ✔ | **Q3:** "Table 1, there seems to be some ambiguity about the effect of F-label on transferability assessment, since the performance of the EMMS(one) and EMMS is identical in some rows." **A3:** Thanks for the question. We claim that F-Label is the key ingredient of our EMMS. We demonstrate the effectiveness of F-Label in the following. F-Label has evident advantages over one-hot labels from our experiments. First, from Table 1 of the main text, EMMS (with F-Label) outperforms EMMS(One) on 5 datasets but performs worse than EMMS(One) on only 1 dataset. Moreover, EMMS (with F-Label) has higher $\tau_w$ than EMMS(one) on average over 11 datasets in total. Second, we have compared the effectiveness of F-Label and one-hot label when selecting pre-trained CNN-based models in Table 10 of the Appendix. We see that EMMS (with F-Label) outperforms EMMS(one) on 7/11 datasets when a single F-Label is used and outperforms EMMS(one) on 9/11 datasets when multiple F-Labels are used. These results demonstrate the effectiveness of F-Labels. In addition, we also show that F-Label encodes richer semantic information than one-hot labels. Hence, EMMS has an advantage in encoding semantic labels with a large category size in image classification. For instance, EMMS outperforms EMMS(one), such as CF100 (EMMS 0.736 v.s. EMMS(one) 0.745), Food (EMMS 0.579 v.s. EMMS(one) 0.673) and Sun397 (EMMS 0.592 v.s. EMMS(one) 0.619) in Table 1 of the main text. Besides, F-Label unifies diverse forms of labels in different tasks and facilitates multitask model selection. As shown in Table 2-5 of the main text, EMMS is a good model selector in image captioning, VQA, TQA, and referring expression comprehension. **Q4:** "It is recommended that the paper include a broader array of foundational models for F-label. Is CLIP currently the model with the highest effects?" **A4:** We have investigated the effect of different foundation models, including three language foundation models and three multimodal foundation models. Please see more details in [General Response Q1/A1](https://openreview.net/forum?id=2ep5PXEZiw&noteId=kK4DhbM069). We hope our responses help address the concerns of the reviewer. We are happy to run more experiments if the reviewer has any pieces of interest. --- Rebuttal 2: Comment: Thank you very much for your insightful suggestions on this paper. We have responded to each of these in great detail. If you have any other questions, we are more than happy to provide additional clarification as well as experiments, and look forward to your reply ! --- Rebuttal 3: Comment: We express our sincere appreciation for the valuable suggestions regarding this paper. In response, we have provided thorough and detailed explanations. If there are any further questions, we are delighted to offer additional clarifications and conduct further experiments. We eagerly look forward to your reply! --- Rebuttal 4: Comment: We would like to extend our heartfelt gratitude for the invaluable suggestions provided for this paper. In light of your feedback, we have diligently provided comprehensive and elaborate explanations. If you have any further inquiries, we are more than delighted to offer additional clarifications and conduct further experiments. We eagerly anticipate your response!
Summary: This paper focus on an under-explored problem of estimating neural network transferring capability without actually fine-tuning the multi-modal multi-task model on individual downstream tasks. The problem is well-motivated and is of great practical importance. The solution proposed in this paper is straightforward, which essentially is to encode the target label (in the text form) with a foundation model. The embedding of the label text is then treated as the target and a model's transferability can be estimated through a simple weighted linear regression, which is then solved by an alternating minimization algorithm. The paper experiments with 5 downstream tasks and 24 datasets, and show that the proposed estimation method (EMMS) is fast and effective on most tasks. Strengths: 1. The task is practically important and under-explored in previous work. As the authors mentioned, the previous estimation is usually limited by the classification tasks and the proposed work is more flexible as it directly encodes the label as a text sequence, therefore it can be used for multiple different types of tasks. 2. The proposed method is intuitive and straightforward, and the algorithms for solving the problem are clear. The theoretical proof looks reasonable although I didn't carefully check all the details in the equations. 3. The approximation with algorithm significantly speed up the computation when estimating the performance. 4. This paper performs extensive experimentations by fine-tuning the expensive models on many downstream tasks (to obtain ground truth) and the proposed approach shows superior performance over other methods with regard to correlation and the wall clock speed. Weaknesses: 1. The Figure 1 bottom is not very informative -- the radar chart is most effective for showing multivariate data with quantitative variables, but in this setting the variables are all binary (applicable/inapplicable), therefore it's actually carrying limited information. Probably replace the figure with a table of checkbox to illustrate the advantage of the proposed approach. 2. As I mentioned in the limitations, some important assumptions are not validated in the paper. Specifically the key assumption that directly leads to the weighted linear square regression problem and alternating minimization algorithm is the linear mapping from model feature space to the F-label space. More justifications about the assumptions are needed. 3. I'm also concerned about the choice of evaluation metric of weighted Kendall's $\tau_w$. If I understand correctly, as a ranking metric it focus more on the relative orders of the elements in the rank, but ignores the actual number. In other words for A>B>C>D, it doesn't care whether A is just marginally higher than B or is much higher than B. It seems wasteful since we already obtain the ground truth downstream task model performance through expensive fine-tuning, but with the $\tau_w$ metric only the relative orders of the ground truths are useful. Therefore I'm wondering if it's possible to directly estimate the actual performance. I might be wrong or missed some important considerations so any clarifications or rationales are appreciated. 4. Some minor issues: in the tables with actual results, some numbers are bolded as the best performance, but in many cases there is a tie between two numbers in the table. Probably it's better to highlight both. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: I'm curious if there are other metrics that can be used to compare the proposed methods with other approaches, e.g. other than $\tau_w$ and wall-clock time. For the former as I stated in the weaknesses section, it seems that it only cares about relative orders of candidate pre-trained models; for the latter the wall-clock time is easily affected by multiple factors (e.g. multi-threading, concurrent execution of other tasks in multi-core environments, etc.) I would like to learn more about the rationale for choosing these metrics. Also it seems in this paper only the linear mapping is considered and it is used as an assumption without further justification. This may need more clarifications as one can easily come up with alternative approaches for estimate the mapping from features to label embeddings. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: This paper has a dedicated section to discuss the limitations and I agree with the authors that the quality of the foundation model (which is mainly responsible for the encoding of the label text) plays critical roles in the effectiveness. However I think there exists some other limitations which are also worth mentioning. Specifically, some important assumptions in this paper is in L164 and L168, where the label embedding is assumed as a linear mapping of the model feature. This assumption seems harsh and I think a more thorough discussion about the assumption will be useful. There is no obvious potential negative societal impact with this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments and valuable suggestions. We address the reviewer's concern as follows. **Q1:** "The Figure 1 bottom is not very informative ..." **A1:** Good advice. We have redrawn a table of checkboxes to illustrate the ability of EMMS. Please see Fig.A and Table A in the attached PDF for more details. **Q2:** "... need justification for the assumption of linear mapping ..." **A2:** We are sorry for the insufficient justifications. Here we provide more details about the linear assumption. Specifically, EMMS assumes that the true label embedding $z$ is a linear mapping of the model feature with Gaussian noise. This assumption is commonly used in recent methods. For example, LogME [1] assumes that: $z \leftarrow N(w^T\hat{x},\beta^{-1})$, which implies that there is a linear mapping from the model feature space to the label space. PACTran [2] also has a similar setting. The difference is that LogME takes a one-hot label as the true label embedding, which limits its applicability. But our EMMS treat the label embedding $z$ as a hidden variable. And F-Labels $\{z_k\}_{k=1}^K$ obtained from different foundation models are assumed to be noisy oracles of true label embedding $z$. Since labels in many tasks can be easily encoded into F-Lables, our EMMS can be used as a multitask model selector. The linear assumption is reasonable in image and text classification tasks because a linear classifier is usually used when the pre-trained model is transferred to a target task. For tasks of VQA and image captioning, the label in these tasks can be viewed as pure text [3]. By common practice [4,5], these tasks can also be essentially treated as a multi-label classification (vocabulary-based classification). Hence, a linear assumption is also reasonable for these tasks. For the task of referring to image comprehension, we still encode the bounding box label as embedding following [2]. We find that a linear assumption still works well, although the task head is a transformer encoder. It is noteworthy that the computation of previous model selection methods is computationally expensive if we direct employ them with vocabulary-based one-hot multiple labels due to the large vocabulary size. F-Labels in EMMS can be deemed as a compressed representation of vocabulary-based one-hot multiple labels as indicated in Fig.3. Due to the simplicity of a linear assumption and informative F-Labels, EMMS can quickly measure the transferability of numerous pre-trained models, making it possible to be a fast and accurate transferability assessment. [1] Kaichao You, et al. LogME. In ICML 2021. [2] Nan Ding, et al. PACtran. In ECCV 2022. [3] Peng Wang, et al. OFA. In ICML 2022. [4] Antol et al. VQA. In ICCV 2015. [5] Goyal, Y., et al. Making the V in VQA matter. In: CVPR 2017. **Q3:** "I'm concerned about the choice of evaluation metric of weighted Kendall's ... if it's possible to estimate the actual performance." **A3:** Model selection is a well-defined problem in transfer learning, which aims to rank pre-trained models and select the top-performing model for the target task. Model selection techniques have been used extensively in classification tasks, such as LEEP, NLEEP, LogME, TransRate, PACTran, SFDA, and so on. The most common metric to measure the performance of a model selection method is the weighted Kendall's tau, i.e. $\tau_w$. Eqn. (15) of the Appendix gives the definition of $\tau_w$. In principle, a larger $\tau_w$ implies the transferability metric can rank pre-trained models better. And if a metric can rank top-performing models better, $\tau_w$ would also be larger. We also use other measurements, such as Person's coefficient, to assess the performance of transferability metrics in Table 9 of Sec. D in Appendix, where we can see that EMMS still outperforms LogME and other baselines under various measures. In addition, we agree that directly predicting the performance of a pre-trained model on the target task can be practical. But it is more challenging than model selection. Recent work [6] tackles this problem by proposing a benchmark consisting of ground-truth evaluations of 35 pre-trained models and 23 datasets. However, it still focuses on classification tasks, and it is unknown how well the prediction technique can be generalized to datasets in the training set. How to design a method to predict the performance on various target tasks requires ongoing efforts. [6] Orr Zohar et al. LOVM: Language-Only Vision Model Selection. **Q4:** "Some minor issues: in the tables with actual results, some numbers are bolded as the best performance, but in many cases, there is a tie between two numbers in the table. Probably it's better to highlight both." **A4:** Thank you for the suggestion. We will highlight all the best results in the final version. **Q5:** "For the latter, the wall-clock time is easily affected by multiple factors (e.g. multi-threading, concurrent execution of other tasks in multi-core environments, etc." **A5:** For the computation complexity, we measure the time for each metric on the same CPU device (AMD EPYC 7H12 with 64-Core Processor) three times and take the average as the final result. To avoid the influence of multi-threading, concurrent execution. We run each model selection method on a single dataset for a single testing case. The code for counting wall-clock time is available at https://github.com/anonymous123654/AnonymousEMMS. In addition, we also compare the theoretical computational complexity between EMMS and LogME in Table C of General Response Q2/A2. We can see that our EMMS not only has lower computation complexity but also enables fully vectorized computation. Please check more details in [General Response](https://openreview.net/forum?id=2ep5PXEZiw&noteId=kK4DhbM069). We hope our responses help address the concerns of the reviewer. We are happy to run more experiments if the reviewer has any pieces of interest. --- Rebuttal 2: Comment: Thank you very much for your constructive suggestions about the paper, it definitely helped us to improve it. If you have any other questions we are more than willing to continue with the clarifications and experiments, looking forward to your reply! --- Rebuttal Comment 2.1: Title: Thanks for the explanations! Comment: I've read the response from the authors and I'm satisfied with the answers. I am gonna increase my rating for this paper.
Summary: This paper proposes to utilize large-scale foundation models for efficient multi-task model selector (EMMS). Concretely, the authors utilize foundation model to transform different label format (category label, text, bounding boxes) into unified noisy label embeddings. EMMS then could measure the compatibility between the models’ features and corresponding 59 label embeddings via weighted linear regression. Experiments on 5 downstream tasks with 24 multi-modal tasks shows the proposed method's effectiveness and efficiency. Strengths: - the motivation of this paper is important: investigating how to evaluate foundation model on a set of multi-modal tasks without finetuning all the target tasks. - the writing is clear and easy to follow. - the authors also provide code for reproducibility. - The experiments show improvement on the 5 downstream tasks with 24 datasets. Weaknesses: As the authors mentioned in the limitation section, the proposed method is bottlenecked by the capabilities of the chosen foundation model. One could further ask, if the foundation model is good enough, why don't we just use the foundation model for the intended downstream tasks? For example, use CLIP for image classification? Technical Quality: 3 good Clarity: 3 good Questions for Authors: please refer to the weakness section Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and helpful suggestions. We have provided a detailed general response to the concerns of all the reviewers. We address the reviewer's concern as follows. **Q1:** "As the authors mentioned in the limitation section, the proposed method is bottlenecked by the capabilities of the chosen foundation model. One could further ask, if the foundation model is good enough, why don't we just use the foundation model for the intended downstream tasks? For example, use CLIP for image classification?" **A1:** It is a significant question. It is known that foundation models have achieved great success in various tasks. Why do we still need to perform model selection on some conventional models? We have provided some explanations in related work in Sec. 2. Here, we answer this question from three aspects. **First, although foundation models have strong generalization ability, it still obtains suboptimal performance on some tasks.** For example, the original paper on CLIP (see Fig. 5) [1] points out that zero-shot CLIP is worse than ResNet-50 on several specialized, complex tasks such as satellite image classification (EuroSAT and RESISC45) and lymph node tumour detection (PatchCamelyon). On the other hand, many existing moderate-size pre-trained models can do some domain-specific tasks very well. Our EMMS is complementary to foundation models, serving as a tool to select an appropriate pre-trained model for target tasks. **Second, foundation models have amounts of parameters which are computationally expensive and require significant computational resources to train and deploy.** In certain scenarios, such as on mobile devices where computational resources are limited, conventional models that are smaller and faster to train can be more practical and cost-effective. **Third, model selection is a well-defined challenging task in transfer learning [2,3,4,5], considering that different tasks may have specific requirements.** However, previous approaches [2,3,4,5] mainly focus on classification tasks. In this work, we extend the model selection to multimodal multitask scenarios by label embedding encoded from multiple foundation models, which essentially expands the applicability of the model selection technique. Similar to TaskMatrix.AI [6], we believe that EMMS could be a useful tool for task completion with the help of foundation models. We hope these experiments help address the concerns of the reviewer. We are happy to run more experiments if the reviewer has any pieces of interest. [1] Alec Radford, et al. Learning Transferable Visual Models From Natural Language Supervision. In ICML 2021. [2] Kaichao You, et al. LogME: Practical assessment of pre-trained models for transfer learning. In ICML 2021. [3] Wenqi Shao, et al. Not all models are equal: Predicting model transferability in a self-challenging 375 fisher space. In ECCV 2022. [4] Yandong Li, et al. Ranking neural checkpoints. In CVPR 2021. [5] Nan Ding, et al. PACtran: PAC Bayesian Metrics for Estimating the Transferability of Pre-trained Models to Classification Tasks. In ECCV 2022. [6] Yaobo Liang et al. TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs. --- Rebuttal 2: Comment: We appreciate the appreciation given by the reviewer towards our work. We are delighted to conduct additional experiments if the reviewer has any specific areas of interest. We eagerly await your response. --- Rebuttal 3: Comment: We are grateful for the kind appreciation expressed by the reviewer regarding our work. To further enhance our research, we are more than willing to conduct additional experiments if you need. We eagerly look forward to hearing from you! --- Rebuttal 4: Comment: We are thankful for the reviewer's acknowledgement of our efforts. We trust that the provided responses address the reviewer’s concerns regarding EMMS. We have presented sufficient explanations to clarify why we still need to perform model selection even at the time of the foundation model. We eagerly await any forthcoming questions and will be delighted to offer further clarification during the discussion stage.
Summary: This paper introduces EMMS, an efficient multi-task model selector for predicting the performance of pre-trained neural networks on multi-modal tasks without fine-tuning. EMMS employs large-scale foundation models to transform diverse label formats into a unified noisy label embedding. Through weighted linear regression and an alternating minimization algorithm, EMMS accurately estimates transferability. Experimental results demonstrate superior performance and significant speedup compared to existing methods. Strengths: 1. Generic Transferability Estimation Technique: The proposed method, Efficient Multi-task Model Selector (EMMS), offers a generic approach to estimate transferability. By utilizing a unified label embedding derived from foundation models and employing a simple weighted linear square regression (WLSR), EMMS becomes a fast and effective method for assessing the transferability of pre-trained models across different tasks. 2. Novel Alternating Minimization Algorithm: The paper introduces a novel alternating minimization algorithm specifically designed to solve the WLSR problem. This algorithm ensures efficient and accurate estimation of transferability within the EMMS framework. 3. The authors did extensive experiments across different tasks and the results demonstrate the efficacy and effectiveness of the proposed method. Weaknesses: 1. The proposed method is complicated and difficult to follow. I can barely understand how to use WLSR to maximize the log-likelihood, which may bring extra complexity for re-producing the method. 2. One of the simple baselines could be estimating the mutual information between the $\hat{x}$ and $Z$. I doubt the effectiveness of this baseline, but it could be nice to have. 3. More experiment details need to be provided like, what are the foundation models used in different tasks. Another ablation could be studying the effectiveness of different foundation models. I guess CLIP could be the strongest model to use in those tasks if we just use one single foundation model. Minor: Line 111, `denote finetuning score`. Please provide more description of Figure 3 (b) as it is not straightforward to understand the confusion matrix of image captions. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please address the comments in weakness section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable suggestions. We list responses to the reviewer's concerns below. **Q1:** "The proposed method is difficult to follow..." **A1:** We are sorry for the vague presentation of our method. In principle, our EMMS is well established by maximizing the log-likelihood of regression with multiple noisy oracles. We describe the detailed setup as follows. **First**, EMMS assumes that the true label embedding $z$ is a linear mapping of the model feature with Gaussian noise. Moreover, F-Labels $z_k$ obtained from different foundation models are noisy oracles of $z$. **Second**, by this setup, we write down the log-likelihood $\mathcal{L} = \sum_{(\hat{x},z_k)}\log P(z_1,\cdots, z_K|\hat{x})$ as given by Eqn. (2) of the main text. **Third**, with further simplification of Eqn. (2), we find that maximizing the log-likelihood can be approximated as a WLSR problem, i.e. Eqn. (3) of the main text. We sincerely suggest the reviewer see the detailed derivations in Sec. B.1 of the Appendix. In addition, we claim that our EMMS is easy to implement and can be used in various multimodal tasks. Despite the complexity of derivations, EMMS turns out to tackle a simple WLSR problem, which can be solved with several lines of code by our algorithm, i.e. Algorithm 2 of the main text. For reference, we make our full code public at https://github.com/anonymous123654/AnonymousEMMS. **Q2:** "One of the simple baselines could be estimating the mutual information ...." **A2:** Thanks for the suggestion. We agree that it is important to compare our EMMS with the approach based on mutual information modelling. In fact, we have compared our EMMS with TransRate [1] in Table 1 of the main text, which models the relationship between the model feature and the one-hot label by mutual information. From Table 1, we can see that EMMS outperforms TransRate on almost all classification datasets. To further validate the efficacy of EMMS, we compare it with TransRate using F-Labels on image classification. We estimate the mutual information of the model feature and F-label following TransRate. Specifically, denote model feature $X \in R^{N \times D_1}$, and the F-Label $Z_k \in R^{N \times D_2}$, we estimate the mutual information of $X$ and $Z_k$ after the discretization operation for each dimension of $D_2$ and then take average to obtain the final score. Moreover, we implement two baselines based on TransRate. When $K=1$, we instantiate the F-Label as the CLIP embedding. When $K=3$, we instantiate the F-Labels as the embedding collection extracted from the CLIP, BERT, and GPT-2. In this case, the final score is averaged over three F-Labels. The results are shown in Table A, where we can see that our EMMS consistently outperforms TransRate with F-Labels (both K=1 and K=3). **Table A.** Comparison of different transferability metrics on image classification regarding $\tau_w$ | Method | Aircraft | Caltech | Cars | CF10 | CF100 | DTD | Flowers | Food | Pets | SUN | VOC | Average | Sota-All | | ------------ | --------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- |--------- |--------- | | TransRate(K=1) | 0.297 | 0.440 | 0.682 | 0.655 | 0.501 | 0.533 | 0.548 | 0.537 | 0.736 | 0.533 | 0.666 | 0.557 | 0-11 | | TransRate(K=3) | 0.295 | 0.441 | 0.682 | 0.523 | 0.501 | 0.542 | 0.548 | 0.539 | 0.730 | 0.533 | 0.679 | 0.546 | 0-11 | | **EMMS** | **0.481** | **0.444** | **0.706** | **0.718** | **0.745** | **0.620** | **0.562** | **0.673** | **0.740** | **0.619** | **0.730** | **0.639** | **11-11** | [1] Long-Kai Huan, et al. TransRate. In ICML 2022. **Q3:** "More experiment details need to be provided like, what are the foundation models used in different tasks." **A3:** Thanks for the suggestion. We have provided the details about the foundation models used in different tasks in Sec. C of the Appendix. For reference, we present it in Table B. Note that CLIP is ineffective in extracting meaningful embedding for pure text tasks without a dedicated design [1]. We use ELECTRA [2] for text question answering instead of CLIP. Electra is a different type of language model which pre-trains a generator model and shows promising results in various NLP tasks. [1] Libo Qin et al. CLIPTEXT. In ACL 2023. [2] Kevin Clark et al. Electra. ICLR 2020. **Table B.** Foundation models used in different tasks. | Task | Foundation Models | | -------- | --------- | |ImgCls, ImgCap, Referring Expression Comprehension, VQA | CLIP, BERT, GPT-2| | Text QA | GPT-2, BART, ELECTRA | **Q4:** "Another ablation ... I guess CLIP could be the strongest model ... single foundation model." **A4:** We have investigated the effect of different foundation models, including three language foundation models and three multimodal foundation models. Please see more details in [General Response Q1/A1](https://openreview.net/forum?id=2ep5PXEZiw&noteId=kK4DhbM069). **Q5:** "More description of Figure 3 (b)." **A5:** We are sorry for the insufficient explanations. Fig3(b) shows the correlation between captions from different images. Each image has two captions denoted by the brace. Two captions from the same image often exhibit similarity. Note that the one-hot label for a caption is translated from a vocabulary following [3]. The corresponding entry is 1 if the word corresponds to one index of vocabulary. We see that it is easier for EMMS to encode the semantic relevance of two captions corresponding to the same image than one-hot encoding. We will incorporate it into the final version. [3] Nan Ding et al. PACtran. In ECCV 2022. We hope these experiments help address the concerns of the reviewer. We are happy to run more experiments if the reviewer has any pieces of interest. --- Rebuttal 2: Comment: Thank you very much for your constructive suggestions. We have carefully considered and experimented with them. If you have any further questions, we are interested in conducting additional experiments on this topic. We look forward to hearing from you! --- Rebuttal 3: Comment: We greatly appreciate the insightful suggestions provided by you. We have carefully taken them into account and conducted experiments accordingly. In case there are any further queries, we are enthusiastic about executing more experiments. We eagerly await your response! --- Rebuttal 4: Comment: We greatly appreciate your helpful suggestions. We trust that the provided responses address the reviewer’s concerns regarding EMMS. We have presented more details about the mechanism behind EMMS and released the full code. We have conducted more experiments including a comparison with methods based on mutual information, and an ablation study of how each single foundation affects our EMMS. We eagerly await any forthcoming questions and will be delighted to offer further clarification during the discussion stage.
Rebuttal 1: Rebuttal: # General Response We thank the reviewers for their detailed reviews and thoughtful suggestions on our work. In general, two main concerns are raised, including (1) the effect of using a single foundation model and (2) the computational complexity of EMMS. To address the reviewers’ concerns, we try our best to perform additional experiments. **Q1: The effect of using a single foundation model to extract F-Label. Is CLIP the best model?** **A1:** We investigate the effect of using a single foundation model. We conduct experiments on image classification and image captioning. We consider EMMS with the single foundation model including language foundation model (1) GPT-2, (2) BERT, (3) RoBerta, and multimodal foundation model (4) CLIP, (5) FLAVA, and (6) AltCLIP. For comparison, we include the result of our EMMS with default setting (K=3, i.e. CLIP, BERT, and GPT-2) and the result of previous state-of-the-art methods obtained from LogME, NLEEP and TransRate. The results are reported in Table A and Table B. We have several observations. (1) Different downstream tasks prefer F-Labels obtained from different foundation models. No single foundation model is dominant in all target tasks. In particular, CLIP is not the best model for extracting F-Label. (2) For image classification, both language and multimodal foundation models are competent for acquiring F-Labels. For image captioning, multimodal foundation models are more appropriate for extracting F-Labels than language foundation models. (3) Our EMMS can achieve the best results by combining F-Labels obtained from multiple foundation models. **Table A**: The effect of the single foundation model in image classification. | Method | Aircraft | Caltech | Cars | CF10 | CF100 | DTD | Flowers | Food | Pets | Sun397 | Voc2007 | Average | Sota/All | | -------- | -------- | ------ | --------- | --------- | --------- | ----- | ------- | --------- | ----- | --------- | --------- | --------- | --------- | | Previous SOTA | 0.299 | 0.412 | 0.693 | **0.741** | 0.736 | **0.621** | **0.655** | 0.580 | 0.707 | **0.619** | 0.651 | 0.610 | 4/11 | | (1) GPT2 | **0.481** | **0.463** | 0.448 | 0.652 | **0.745** | **0.621** | 0.562 | 0.652 | **0.740** | 0.616 | **0.730** | 0.610 | 6/11 | | (2) Bert | **0.481** | 0.444 | 0.458 | 0.718 | **0.745** | **0.621** | 0.562 | 0.592 | **0.740** | 0.616 | **0.730** | 0.609 | 5/11 | | (3) RoBerta | 0.448 | 0.444 | 0.507 | 0.701 | **0.745** | 0.608 | 0.562 | 0.580 | **0.740** | 0.574 | **0.730** | 0.604 | 3/11 | | (4) CLIP | **0.481** | 0.444 | 0.496 | 0.608 | 0.720 | **0.621** | 0.562 | 0.558 | **0.740** | 0.616 | 0.706 | 0.595 | 3/11 | |(5) FLAVA | **0.481** | 0.444 | 0.508 | **0.741** | **0.745** | **0.621** | 0.562 | 0.652 | **0.740** | 0.574 | 0.706 | 0.615 | 5/11 | |(6) AltCLIP | **0.481** | 0.444 | 0.437 | **0.741** | **0.745** | **0.621** | 0.562 | 0.580 | **0.740** | 0.595 | **0.730** | 0.607 | 6/11 | | **EMMS (ours)** | **0.481** | 0.444 | **0.706** | 0.718 | **0.745** | **0.621** | 0.562 | **0.673** | **0.740** | **0.619** | **0.730** | **0.639** | 8/11 | **Table B.** The effect of the single foundation model on EMMS in image captioning. | Method | Flickr8k | Flickr30k | RSICD | flickr10kH | flickr10kR | Average | Sota/All | | ------------- | --------- | --------- | ----- | ---------- | ---------- | ---------- | ---------- | | LogME(CLIP) | 0.530 | 0.393 | 0.618 | 0.764 | 0.634 | 0.588 | 0/5 | | (1) Gpt2 | 0.566 | 0.393 | 0.431 | 0.715 | 0.618 | 0.545 | 0/5 | | (2) Bert | 0.395 | 0.319 | 0.448 | **0.802** | **0.711** | 0.535 | 2/5 | | (3) Roberta | 0.346 | 0.111 | 0.587 | 0.571 | 0.566 | 0.436 | 0/5 | | (4) CLIP | 0.510 | 0.448 | **0.704** | **0.802** | 0.678 | 0.628 | 2/5 | | (5) FLAVA | 0.463 | 0.382 | 0.693 | 0.704 | 0.678 | 0.584 | 0/5 | | (6) AltCLIP | 0.453 | 0.448 | 0.623 | **0.802** | 0.678 | 0.601 | 1/5 | | EMMS | **0.660** | **0.504** | **0.704** | **0.802** | 0.678 | **0.670** | **4/5**| **Q2: The computational complexity of EMMS.** **A2:** We compare the computational complexity between LogME and EMMS in Table C. We see that EMMS has lower computation complexity than LogME because LogME needs several iterations (T=3 on average) to converge. Moreover, EMMS allows for full vector computation and can be quickly solved by existing packages such as numpy.linalg.lstsq. But LogME cannot be written in fully vectorized form as the model parameters in LogME are highly coupled. Hence, LogME needs to be executed in a while loop. **Table C.** The comparison of computational complexity between LogME and EMMS. LogME is fed with F-Label. We denote model feature $X \in R^{N \times D_1}$ and F-labels $Z\in R^{N\times D_2\times K}$ with $N \approx 10^4$, $D_1 \approx 10^3$, $D_2=1024$, $K=3$, and $C\approx 10^2$. Moreover, $T\approx 3$ denotes the iteration number of LogME. | Method | LogME | EMMS | | --------------------------------- | --------------------------------- | ------------------------------------------------------------ | | **Complexity** | $3TD_1^2D_2+(2T+1)ND_1D_2+D_1^3+ND_1^2$ | $ND_1^2+2ND_1D_2+D_1^3+D_1^2D_2+(K^2+K)(ND_2)+K^3+K^2+K\log K$ | | **Simplified Complexity** | $3TD_1^2D_2+ND_1^2+ND_1D_2(2T+1)$ | $ND_1^2+2ND_1D_2$ | | **Vector Compuration** | ✗ | ✔ | We are happy to run more experiments if the reviewer has any pieces of interest. Pdf: /pdf/74d37431b7f7aae5ac4f6ba26f70e5656d8e6e2e.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper introduces the Efficient Multi-task Model Selector, which utilizes foundation models to convert diverse labels into unified label embeddings. These embeddings are then used to calculate a transferability metric within a weighted linear square regression (WLSR) framework. The proposed method achieves impressive results in multi-task scenarios. Strengths: 1. The paper presents a novel and unified strategy for model selection. The use of label embeddings enables the capturing of fine-grained semantics, leading to improved estimation in downstream tasks. 2. The effectiveness of the proposed method is supported by a comprehensive set of experiments covering various downstream tasks. 3. The derivation of equations is provided, establishing a solid foundation for the main contribution of the paper. Weaknesses: No questions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: No questions. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the recognition of our work. We are happy to run more experiments if the reviewer has any pieces of interest. --- Rebuttal Comment 1.1: Comment: Dear authors, Thanks for your response. I have thoroughly reviewed both your rebuttal and the feedback provided by the other reviewers. I acknowledge that the problem you've proposed is indeed interesting and holds value. However, it appears that the current version of the submitted paper lacks certain essential ablation experiments. Considering these factors, I have chosen to cast my vote as borderline accept, and I am eagerly anticipating the revised version of the paper. --- Reply to Comment 1.1.1: Title: Thank you for your reply Comment: Thank you for your response. During the rebuttal, we conducted additional ablation experiments including (1) using more single foundation models to test the effect of EMMS, (2) additional baseline measured with mutual information and comparing the effect, and (3) comparing the computational complexity between LogME and EMMS. **Firstly**, for (1), we discover that different foundation models are preferred for different tasks and CLIP is not always the best foundation model. This is described in detail in [General Response A1](https://openreview.net/forum?id=2ep5PXEZiw&noteId=kK4DhbM069). **Secondly**, we add a new baseline in terms of mutual information metrics to illustrate the superiority of EMMS, which is displayed in [Reply to Reviewer vJKA A2](https://openreview.net/forum?id=2ep5PXEZiw&noteId=OKtcHuyCuX). **Lastly**, for (3), we have carefully compared the time complexity of EMMS and LogME, as detailed in [Reply to Reviewer 4U2k A2](https://openreview.net/forum?id=2ep5PXEZiw&noteId=rxHsFLXxmy). Besides, the NeruIPS policy recounts that the rebuttal phase is a vital part of the review process, as it offers authors an opportunity to address concerns and clarify misunderstandings. We think that the experiments added during the rebuttal should also be taken into account. We sincerely suggest the reviewer check our detailed response in the corresponding part. Thanks for your suggestion again!
null
null
null
null
null
null
Learning Invariant Representations with a Nonparametric Nadaraya-Watson Head
Accept (poster)
Summary: This paper proposes a novel algorithm for learning invariant representations using a fixed head that is the sum of similarities of the query features with a set of support features, weighted by one-hot-encoded class label (in other words, the head predicts the class whose support features most align with the query features). Given this fixed head and a training dataset, the proposed method entails optimizing for a representation that solves a constrained maximum likelihood estimation problem. Experimental results evaluating the proposed technique on three benchmark datasets against a variety of baselines are provided. Strengths: 1. The paper does a good job of formulating the problem and describing the proposed approach. The writing is generally clear. 2. The motivating application to medical image classification in different domains is strong, as this is an important and relevant unsolved problem. 3. The proposed approach is intriguing as it removes the bilevel complexity from the IRM problem, as it fixes a head rather than learning one. 4. The empirical results are rigorous and promising, as they show state-of-the-art or competitive performance by certain variations of the proposed method on three benchmark datasets. 5. The authors clearly acknowledge the limitations of the proposed method. Weaknesses: 1. Practically speaking, the proposed method is computationally expensive because it requires computing pairwise comparisons between the representations support and query samples during training and inference, as the authors acknowledge. However, the support batch size is small during training, and the experiments show that using the Cluster method for inference can allow for doing inference with a small number of support samples without much drop in performance, meaning this is not a huge issue. Still, the empirical results are not overly impressive, as the performance of the best variant essentially matches the baseline [50]. It would be helpful to compare the computational costs of the proposed approach with [50], or run further experiments demonstrating a more clear advantage. 2. The motivating application is strong, and from this it is clear why we want estimators that satisfy conditions 1) and 2) in lines 127 and 128. It is also clear that the proposed estimator with the NW head satisfies these conditions. But it is not clear why we need the NW head in order to satisfy these; perhaps a simpler estimator that does not require using a support set of images for every query image could also satisfy 1) and 2). 3. The Introduction and/or Related Works would benefit from a more detailed discussion of the criticisms of IRM, especially relating to how the proposed method can alleviate these criticisms. Granted, IRM is compared with in Section 4.4, but this section does not discuss whether the proposed method addresses the concerns with IRM posed by [15,21,39]. --------- Post-rebuttal: I have raised my score, please see comment below. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The statement "the objective is equivalent to unconstrained maximum likelihood under the assumption in Eq. 1" is not clear to me, since Eq. 1 only applies for some particular g_C. 2. The random inference mode does not make sense to me: how can the sampling be uniform across the dataset if each class is represented k times? 3. How are the features for NW Probe trained? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Practically speaking, the proposed method is computationally expensive because… [...] It would be helpful to compare the computational costs of the proposed approach with [50], or run further experiments demonstrating a more clear advantage. We agree with the reviewer that computational cost is the primary limitation of our method. The computational costs of parametric approaches like [50] are very similar in practice to ERM, which we provide details of in Table 4. While we agree that empirical results of our proposed method are not substantially improved over [50], we believe our non-parametric regularization approach to enforcing invariance (whether explicit or implicit) is more natural and intuitive than baseline methods because an environment is encoded by manipulating the support set to contain real samples only from that environment. Other baseline methods use proxy methods to achieve this, for example by aligning calibration [2], intermediate layer activations [3], or variances [4] across environments. For other advantages of our method, we refer the reader to our general response. > The motivating application is strong, and from this it is clear why we want estimators that satisfy conditions 1) and 2) in lines 127 and 128. [...]. But it is not clear why we need the NW head in order to satisfy these; perhaps a simpler estimator… Satisfying 1) requires intervening on Y in the causal DAG. In practice, this can be achieved by matching the environment-specific prior for Y (e.g. by sampling balanced mini-batches during training). The approach we take is by balancing classes in the support set, which we view as a minor contribution that fits well with our theme of support set manipulation. Satisfying 2) essentially defines the invariant representation learning task and many prior works seek to satisfy it. Generally, this is achieved by regularizing the standard classification loss with constraints which align representations across environments (e.g. aligning correlations of layer activations or aligning model calibration). With the NW head, however, this can be achieved naturally, by manipulating the support set. > The Introduction and/or Related Works would benefit from a more detailed discussion of the criticisms of IRM, especially relating to how the proposed method can alleviate these criticisms. Granted, IRM is compared with in Section 4.4, but this section does not discuss whether the proposed method addresses the concerns with IRM posed by [15,21,39]. Prompted by this suggestion, we have elaborated further in Section 4.4 discussing how the proposed method addresses the concerns with IRM. We provide an overview of these elaborations here. The IRM objective seeks a representation such that the optimal (parametric) linear classifier’s parameters on top of this representation is the same across all environments. The main concern raised by the aforementioned works (foremost in [21]) is the tractable version of this objective (IRMv1 in the IRM paper). This tractable objective assumes convexity and is complex, requiring computation of the Hessian. The NW head side-steps these issues by eliminating the need to learn the linear classifier parameters at all – instead, the classifier is *fixed by construction* since it has no learnable parameters. Essentially, we can enforce invariance through design of the support set, providing a much more intuitive and computationally simpler objective to optimize. Note that from an empirical standpoint, our experiments with NW-probe also address this issue. NW-probe can be directly compared against IRM, as both have exactly the same architecture and number of learnable parameters; additionally, the causal assumptions and the constraints imposed are theoretically identical. Interestingly, we find that NW-probe outperforms IRM in the datasets we try, suggesting that our $NW^B_e$ training strategy does learn better invariant representations. > The statement "the objective is equivalent to unconstrained maximum likelihood under the assumption in Eq. 1" is not clear to me, since Eq. 1 only applies for some particular g_C. Apologies for the confusion; our purpose in including this statement was to highlight the similarity between our objective and that of maximum likelihood estimation. Specifically, the particular $\phi$ and $g_C$ that satisfies the constraint in Eq. 5 (which is derived from Eq. 1) allows us to essentially drop the conditioning on $e_i$, which is the subscript for the probability estimator being maximized. Thus, we could equivalently write the objective as: $$ \arg\max_{\phi, g_C} \sum_{i=1}^N \log \hat{P}^B (y_i \mid g_C(x_i); \phi) $$ $$ s.t. \ \hat{P}^B_e (y_i \mid g_C(x_i); \phi) = \hat{P}^B_{e'} (y_i \mid g_C(x_i); \phi), \ \ \forall i \in \{1, ..., N\}, \ \forall e, e'\in E. $$ This formulation highlights the fact that our objective is simply maximum likelihood estimation with an additional invariance constraint. > The random inference mode does not make sense to me: how can the sampling be uniform across the dataset if each class is represented k times? Each class is represented k times, but which images from each class are present in the support set is chosen uniformly at random from the full training set. > How are the features for NW Probe trained? NW-probe is trained in two stages. In the first stage, we train the feature extractor weights ($\varphi$) only using an $NW^B_e$ training strategy (Eq. (7) or (8)). In the second stage, we freeze $\varphi$ and train a linear probe ($w$) on top of the frozen representations. --- Rebuttal 2: Title: Respond to authors' rebuttal Comment: Please, look at the authors' rebuttal and the other reviewers' comments and indicate if you would like to change anything in your review. --- Rebuttal Comment 2.1: Comment: Thank you to the authors for your detailed responses. After reading all the responses, I have a better understanding of the motivation for and demonstrated advantages of using the NW head. I have decided to raise my score from 4 to 5. --- Reply to Comment 2.1.1: Title: Response to reviewer f7MY Comment: We thank reviewer f7MY for their prompt response and positive feedback.
Summary: The authors address an important problem of reliability in deep learning, given data collected from different sources (environments). For this, the authors propose a method that allows the separation of style and content of input objects, which ensures stable behavior in different environments. The authors develop a causality graph and show how to embed causal assumptions into the model. For a concrete model, the authors use a nonparametric method based on the Nadaraya-Watson (NW) head. For predictions, they leverage the NW head, which uses learned representations and a support set of labeled data. They conduct experiments on several datasets with real-world environment variability, and experiments show the competitive performance of the approach. Strengths: 1. The paper addresses an important problem and provides an interesting solution based on Nadaraya-Watson kernel regression, which is interesting. 2. In general, I find the paper well-written, well-structured, supplied with nice illustrations and easy to follow. 3. The choice of datasets for experiments is interesting and challenging. Weaknesses: 1. The overall novelty and the extent of the contribution made by this paper are unclear. The use of the Nadaraya-Watson (NW) head on top of a feature extractor has been previously used [1, 2]. Furthermore, it's not clear whether the manipulation of the support set can be considered as a novel idea. It would be beneficial for the authors to explicitly state the specific contributions of their paper in the introduction section. 2. There appears to be a missing reference to Figure 3 in the paper. 3. No code available, so I can not check if experiments are reproducible. [1] Alan Q. Wang and Mert R. Sabuncu. A flexible nadaraya-watson head can offer explainable and calibrated classification. Transactions on Machine Learning Research, 2023. [2] Kotelevskii, Nikita, et al. "Nonparametric Uncertainty Quantification for Single Deterministic Neural Network." Advances in Neural Information Processing Systems 35 (2022): 36308-36323. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. It is well known in the literature related to the methods which work with embeddings (see, for example, [3, 4]) that there might be a feature collapse. So that for the small perturbation in the input, the change in the output is arbitrary. Did you use any techniques to prevent it, like spectral normalizations? 2. Typically, in kernel methods, there is a scale parameter (e.g., bandwidth) that is very important for performance and requires careful selection. As I see from Eq. 2, you simply use negative Euclidean distance. Is it safe? Might there be a case that embedding from one environment a scattered wider than from another, and it is required to use different scale parameters to fit them well? 3. In [2], authors also use NW head to a potentially big dataset, and to ease computations, they use approximate nearest neighbors algorithms. It would be interesting to apply this option here as well since the "random" inference scheme might be too noisy, "full" too expensive, and others too rough. [3] Liu, Jeremiah, et al. "Simple and principled uncertainty estimation with deterministic deep learning via distance awareness." Advances in Neural Information Processing Systems 33 (2020): 7498-7512. [4] van Amersfoort, Joost, et al. "On feature collapse and deep kernel learning for single forward pass uncertainty." arXiv preprint arXiv:2102.11409 (2021). Edit: I would like to thank the authors for their answers during rebuttal period. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Authors discussed limitations in the main part of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The overall novelty and the extent of the contribution made by this paper are unclear… We have enumerated our specific contributions in the general response. To reiterate here, to the best of our knowledge, we believe that the manipulation of the support set for learning invariant features is a novel approach to invariant learning and the broader goal of domain generalization. We are not aware of any prior work that approaches invariant representation learning from the perspective of a manipulation of support set from a non-parametric perspective. In our revised manuscript, we plan to explicitly add a Contributions subsection in the Introduction section. > There appears to be a missing reference to Figure 3 in the paper. Thank you for pointing out this omission – we will add this missing reference in our revised manuscript. > No code available, so I can not check if experiments are reproducible. To maintain anonymity in the initial submission, we withheld a link to our code. We intend to make it available upon acceptance. Note that our code and dataset relies heavily on the NW head repository [1] and WILDS benchmark [6], thus facilitating easy reproduction and comparison. Both repositories are publicly available. > It is well known in the literature related to the methods which work with embeddings (see, for example, [3, 4]) that there might be a feature collapse. [...] Did you use any techniques to prevent it, like spectral normalizations? In our experiments, we have not observed the feature collapse phenomenon. Other works have found similar results; for example, the authors in [2] find little difference when adding spectral normalization (see Table 6 in [2]). The feature collapse phenomenon seems to arise in the context of Gaussian processes and its extension to deep networks, Deep Kernel Learning. These methods require complicated training schemes that minimize the ELBO, which can be decomposed into a data fit term and a complexity penalty term. From [4], the authors associate feature collapse with this complexity term. However, in the NW head, training is performed with a standard maximum-likelihood loss and thus is relatively simpler than GP-based methods. In particular, there is no complexity term which can cause feature collapse. Put another way, theoretically speaking, the NW head departs from standard parametric classification only from an architectural standpoint, and not from a training/inference/optimization standpoint. > Typically, in kernel methods, there is a scale parameter (e.g., bandwidth) that is very important for performance and requires careful selection…. We did not tune the bandwidth in our experiments. The reason for this is that we optimize both the feature extractor and classifier end-to-end, and the kernel used in the classifier is dependent on the features that the feature extractor learns (unlike [2], e.g.). Effectively, we allow the feature extractor to optimize the bandwidth during training (note the same approach is taken in prior works, see [5]). Nevertheless, we take the reviewers point and believe the exploration of kernel bandwidths (and other kernels) might be an important future work (as we allude to in the Conclusion section Lines 296-297). We tend to disagree with the reviewer’s point that the embeddings from one environment might be scattered wider than another environment, since our optimization (Eqs. 7 or 8) enforces (either explicitly or implicitly) the embeddings from one environment to match the embeddings from another. Our additional results in Figure 9 shed some light on this hypothesis from an empirical perspective – we observe that the nearest neighbors of query images in the feature space come from all 3 training environments evenly. This indicates that the model relies more evenly across all 3 environments to make its prediction, and further suggests that representations are invariant across environments. > In [2], authors also use NW head to a potentially big dataset, and to ease computations, they use approximate nearest neighbors algorithms. It would be interesting to apply this option… Thank you for this interesting suggestion. Prompted by this, we have added these results in Table 5 in the corresponding PDF. In it, we show additional results for K-NN and HNSW (Hierarchical Navigable Small Worlds), a fast approximate nearest neighbor algorithm [7]. Overall, we observe that k-NN and HNSW perform nearly identically for all the datasets and variants we tried. Additionally, both modes perform better in terms of mean performance on Camelyon-17, and otherwise perform about as well as the best-performing modes for ISIC (Cluster) and FMoW (Ensemble). However, they generally have higher variances across model runs. We suspect this is because there are less total samples used in the support (20 vs. more than 1000 for Full), and also not all classes are guaranteed to be represented in the support, leading to more unstable results. [1] Wang et al. A flexible nadaraya-watson head can offer explainable and calibrated classification, 2023. [2] Kotelevskii et al. Nonparametric Uncertainty Quantification for Single Deterministic Neural Network., 2022 [3] Liu et al. Simple and principled uncertainty estimation with deterministic deep learning via distance awareness, 2020 [4] van Amersfoort et al. On feature collapse and deep kernel learning for single forward pass uncertainty, 2021 [5] Snell et al. Prototypical networks for few-shot learning, 2017 [6] Koh et al. Wilds: A benchmark of in-the-wild distribution shifts, 2021. [7] Malkov et al. Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs, 2016. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their detailed answers! My questions and concerns are well addressed. I have no further questions for the authors, and I increase my evaluation score by one. --- Reply to Comment 1.1.1: Title: Response to reviewer xs7Q Comment: We thank reviewer xs7Q for their prompt response and positive feedback. --- Rebuttal 2: Title: Respond to authors' rebuttal Comment: Please, look at the authors' rebuttal and the other reviewers' comments and indicate if you would like to change anything in your review.
Summary: The authors apply a recently proposed method for similarity-based prediction to the problem of invariant learning. This paper builds off of the Nadaraya-Watson architecture where predictions for a test input are derived via nearest proximity according to a learned kernel. The proposed method uses a NW head for each training environment, plus some additional regularization to encourage invariance over an assumed causal graph. Strengths: Regularization-based approaches to invariant learning are known to entail optimization difficulties [https://proceedings.mlr.press/v162/zhang22u.html]. The non-parametric prediction approach taken by the authors provides a unique and interesting alternative. The proposed method shows promise on relevant datasets. I was especially interested in the probe variant of NW-training, which indicates a more invariant internal representation than IRM. Weaknesses: While the proposed approach is novel and interesting, after reading the paper I was still left with questions as to how the method is implemented, and why it works (see questions below). Also, I think the paper could benefit from some simple theoretical analysis that demonstrates (even in a simplified setting) when we expect NW-based training will discover invariant features. I also feel that the proposed approach, which uses causal graphs to motivate independances on subsets of the training data, is very related to recently proposed MMD-based approaches [https://proceedings.mlr.press/v151/makar22a, https://arxiv.org/abs/2209.09423]. A discussion on how the proposed method (and its use of non-parameteric prediction) differs from these papers would be useful. The fact that so many different variants of the NW method are tried makes me a bit wary of claims like "NW^B does X percent better than ERM" [line 256]. Not all the NW^B flavors outperform ERM. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * What is the role of the support set in OOD generalization? Are training samples always used to make predictions at test time? Empirical evidence suggests OOD generalization is possible, but since the method relies on domain-labeled training examples to make predictions, is there some implicit assumption about how domains are related? For example in the IRM paper they discuss the relationship between domains geometrically in terms of “linear general position”. * How is the kernel bandwidth (called "temperature" in the NW head paper) chosen? This seems like a critical hyperparameter but I didn't see any discussion of it. * Why is it more effective to regularize a non-parametric predictor (Eqn 7) than to directly regularize a standard classification loss? * With most invariant learning methods there is a tradeoff between in-distribution and out-of-distribution generalization. Do you see that pattern with your method? Table 2 only reports OOD generalization if I understand correctly Misc comments: * The proposed method “leverages the NW head as a conditional estimator for Y conditioned on Z” [143–144]. The workshop paper “Towards Environment-Invariant Transfer..." by Eyre et al (https://openreview.net/forum?id=c4l4HoM2AFf) may be of interest because they also use kernels to estimate similar statistics of the learned representations. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations are addressed [Sec 6]. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > [...]. A discussion on how the proposed method differs from these papers would be useful. We agree with the reviewer and reiterate that our causal setup and DAG in Fig. 2b is not novel, and has been proposed in many works within this literature (see also [1-2]). As mentioned in the general response, our primary contribution in this work is manipulation of the support set in an NW head during training for learning invariant and robust representations, and for the broader purpose of domain generalization. To the best of our knowledge, ours is the first work to approach domain generalization in a non-parametric manner, as well as the first work to experiment with manipulating the support set to inject prior information into the training process. > The fact that so many different variants of the NW method are tried makes me a bit wary of claims like "NW^B does X percent better than ERM" [line 256]. Not all the NW^B flavors outperform ERM. To clarify, we only make the claim that NW^B outperforms ERM on the imbalanced ISIC dataset, where balancing theoretically offers a potential OOD advantage. For other more balanced datasets (Camelyon-17 and FMoW), NW^B does not improve over ERM, as one might expect. To demonstrate this further, we have trained an ERM variant with balanced classes per environment for ISIC, which we denote $ERM^B$, and present the results below. | | F1 score | | ----------- | ----------- | | $ERM$ | 58.2 (2.9)| | $NW^B$, Ensemble|63.9 (3.8)| | $ERM^B$|63.0 (2.5)| We find that performance is on-par with $NW^B$, as is expected. > What is the role of the support set in OOD generalization? Are training samples always used to make predictions at test time?...is there some implicit assumption about how domains are related? The reviewer is correct in that the support set is composed of the training set in order to make predictions at test time. Note that we only rely on domain-labeled training examples to make predictions in Ensemble inference mode, and not any other inference mode (Random, Full, and Cluster). For Camelyon and ISIC datasets, the use of domain information at prediction doesn’t make much difference in optimal performance (e.g. compare $NW^B_e$ performance for both Full and Ensemble). Theoretically, we do not make any assumptions about how domains are related other than assuming that if any X has a non-zero probability in one environment, it has a non-zero probability in all environments (this is akin to the positivity assumption in causal inference). We agree with the reviewer that this is an important question that, while we believe is out of scope for our current manuscript, should and has been further explored in other works (e.g. see [1]) > How is the kernel bandwidth (called "temperature" in the NW head paper) chosen? This seems like a critical hyperparameter but I didn't see any discussion of it. The bandwidth hyperparameter (i.e. temperature) was set to 1 for all experiments. Thank you for pointing out this omission – we have added this detail in the revised manuscript. We did not tune this hyperparameter. The reason for this is that we optimize both the feature extractor and classifier end-to-end, and the kernel used in the classifier is dependent on the features that the feature extractor learns (unlike [6], e.g.). Thus, we let the feature extractor to figure out the bandwidth on its own (note the same approach is taken in prior works, see [5]). Nevertheless, we take the reviewers point and believe the exploration of kernel bandwidths (and other kernels) and its relation to optimization procedures might be an important future work (as we allude to in the Conclusion section, lines 296-297). > Why is it more effective to regularize a non-parametric predictor (Eqn 7) than to directly regularize a standard classification loss? We note that the first term in Eq. 7 is a standard cross-entropy classification loss. The second term in Eq. 7 enforces invariance across environments by forcing predictions from two different environments to be the same for a given input. We argue that our non-parametric regularization approach to enforcing invariance is more natural and intuitive than baseline methods because an environment is encoded by manipulating the support set to contain real samples only from that environment. Other baseline methods use proxy methods to achieve this, for example by aligning calibration [2], intermediate layer activations [3], or variances [4] across environments. However, as we mention in our general response, the result we find the most compelling in this work is the Implicit variant (NWbe-Implicit), which is competitive with and often outperforms state-of-the-art baselines while requiring no hyperparameter to tune. Such an implicit approach is difficult to achieve with a parametric strategy. > With most invariant learning methods there is a tradeoff between in-distribution and out-of-distribution generalization. Do you see that pattern with your method? Below, we show in-distribution (ID) and out-of-distribution (OOD) results for Camelyon-17, which provides an ID validation set. | | ID | OOD | | ----------- | ----------- | ----------- | | $ERM$|93.2 (5.2)|70.3 (6.4)| | $NW^B$, Full|96.1 (1.0) |72.0 (6.7)| | $NW^B_e$, Full|92.8 (2.0)| 80.0 (2.7)| We observe that there is a tradeoff between ID and OOD performance, similar to prior work. We plan to add these results to our revised manuscript. [1] Rosenfeld et al. The risks of invariant risk minimization, 2021. [2] Wald et al. On calibration and out-of-domain generalization, 2021 [3] Sun et al. Deep coral: Correlation alignment for deep domain adaptation, 2016 [4] Krueger et al. Out-of-distribution generalization via risk extrapolation (rex), 2021. [5] Snell et al. Prototypical networks for few-shot learning, 2017. [6] Kotelevskii et al. Nonparametric Uncertainty Quantification for Single Deterministic Neural Network. 2022. --- Rebuttal Comment 1.1: Title: acknowledging author rebuttal Comment: I have read the author response and other reviews. Thank you to the authors for acknowledging my concerns and helping me to better understand the implementation (temperature selection, etc.). I don't think we are on the same page about the ISIC results. Even if I look at just that column, I see some $NB^B$ flavors that do better than ERM, and some that do worse. I really appreciate the completeness of the experiment design w.r.t. trying different baselines and variants of the proposed method. I'm just not totally comfortable with how the results are described. Correct me if I'm missing something. I think the paper will indeed be stronger by discussing more related works. But the submission and rebuttal as-is, especially given that a theoretical analysis of when NW can succeed is out of scope, is not enough to move the needle for me. I maintain my score. --- Reply to Comment 1.1.1: Title: Clarification about $NW^B$ results Comment: We thank the reviewer for their prompt response. Apologies for the confusion about the $NW^B$ results. Looking at the table of results for the ISIC dataset, the reviewer is correct that Random mode for $NW^B$ (56.7) is indeed lower than ERM (58.2). Random mode is the worst-performing of all the inference modes in general. This might be expected as it uses the least number of elements per support and thus has the least support information to make a prediction. We agree that this should be specified clearly in the manuscript, which we will update. Sorry again for the confusion!
Summary: This works proposes a causally motivated new setting for domain generalization building on the existing nonparametric Nadaraya-Watson head on top of a learned neural encoder. More precisely, it assumes that data inputs are causally generated from style latent features, which are environment-dependent, and content latent features, which are environment-independent after class-wise balancing. The authors hence try to learn these content features with a neural network. Predictions are made using the classical Nadaraya-Watson classifier on the learned representations of some support points. Training is carried with gradient-based maximum-likelihood, both with and without an additional penalty that encourages environment-independence of representations. At inference, fixed support points from the training set are chosen following one out of four strategies. The proposed method is compared to well-known domain generalization approaches in three datasets. Strengths: ### Originality The Nadaraya-Watson head is not novel, nor the causal framework proposed for domain generalization. Hence, the originality of the paper lies in using this nonparametric method for this purpose, motivated by the fact that the choice of the support set provides “a degree of flexibility not possible with parametric models”. This seems enough to say that the work is original and potentially significant for the subfield. ### Clarity I found the paper very well-written and structured. Figures are clean and the method description is clear. ### Quality The paper presents interesting results in a rather fair setting, with error bars and well explained experimental protocol. ### Significance This work addresses the very important and difficult question of domain generalization with a very creative approach: non-parametric learning coupled with neural representations. Weaknesses: ### Clarity The only remark I would have is that the difference between NWb and NWBe-implicit could be better explained and highlighted in the paper (it took me a while to understand the subtle difference between them). Typos: - l. 255 “and” ### Quality Despite the very positive points listed above, I still have a few concerns (from the more important to the least): 1. My main remark here concerns the pros and cons of the method, which are not very well highlighted in my opinion. While the idea is very interesting and it is easy to understand why having access to support points can help for domain generalization, I don’t think this is very well illustrated in the experiments. Results from table 2 show that NWbe is equivalent to CLOvE in Camelyon-17, worse than it in FMoW and (given the high uncertainties) only slightly better than it on ISIC (the only dataset on which CLOvE was reimplemented, according to the Appendix). Given the important computational caveats of the NWbe method (2x longer training and 8-16x longer inference than ERM) and additional overhead (double or triple batches, additional hyperparameters like number of cluster, strategy, etc), I find important to highlight and illustrate what benefits the method has compared to other existing methods which lead to comparable results. For example, it is mentioned in the conclusion that the method has interpretability advantages but this is not shown in the experiments. 2. Another point that surprised me was the fact that the implicit training (without regularization) led to comparable or better results than the regularized version. How do you explain that? I believe this point needs more investigation as it could indicate that the reason your method works is not exactly the ones explained (learning of environment-independent content latent features). It could be interesting for example to evaluate for both implicit and explicit models the invariance of learned neural embeddings to environment changes. 3. Is it fair to compare IRM to your representations + MLP given that IRM is trained with a simple linear classifier? 4. In figure 7 of the appendix, the rightmost accuracies for NWb (close to 90%) are substantially higher than those reported in table 2 (around 60%). What is the reason? 5. Also regarding this figure 7, I am curious why the balanced plot increases quite fast as the test set becomes more and more imbalanced. It is a bit surprising given that the method consists in class-balancing at training isn’t it ? Have you investigated why? 6. Also regarding balancing, I think that an additional baseline corresponding to ERM with class-balancing could be useful to better understand what is bringing the deltas in performance between ERM and NWb in table 2: is it the balancing or the NW head or both? 7. How was the number of clusters $k$ set? I see in appendix it was set to 3, but an analysis of its impact or at least a comment explaining why such hyperparameter is not a sensitive one could be a plus. ### Significance Despite the very interesting method and important questions addressed, I think the paper experimental part still needs a little work. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: Question above Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 4 excellent Contribution: 2 fair Limitations: Limitations of the method are cited and discussed in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The only remark I would have is that the difference between NWb and NWBe-implicit could be better explained and highlighted in the paper. We plan to add additional clarifying statements in our revised manuscript. To be clear, the difference between them is that in $NW^B_e$-implicit, we not only balance across classes (B), but also condition the support on an environment (e). The (-implicit) indicates that we don’t explicitly regularize the domain invariance (Eq. 7), but enforce it implicitly through the training process (Eq. 8). > My main remark here concerns the pros and cons of the method, which are not very well highlighted in my opinion. [...] Thank you for raising this important question. We have addressed it in the general response, and kindly refer the reviewer there. > Another point that surprised me was the fact that the implicit training (without regularization) led to comparable or better results than the regularized version. How do you explain that? In Implicit (Eq. 8), the constraint will be approximately satisfied in the sense that the model will be encouraged to predict the identical ground truth across all environments for an input. As an example, consider a classification task with 3 environments, and consider 3 images from the 3 environments each with label 0. Given infinite data and infinite model capacity, the model will predict a label 0 for each of these images, thus achieving invariance across the 3 domains. This analysis is borne out in the experiments. For the simpler datasets (Camelyon-17 and ISIC), Implicit performs equally well with higher variances compared to Explicit. In contrast, for FMoW, Explicit outperforms Implicit on Ensemble mode. This could be because the model does not have enough capacity to capture the invariance in an implicit setting. > Is it fair to compare IRM to your representations + MLP given that IRM is trained with a simple linear classifier? IRM and NW-probe ($NW^B_e$-learned representations + finetuned MLP) have exactly the same architecture. In IRM, both the feature extractor weights ($\varphi$) and the classifier weights ($w$) are trained simultaneously. In contrast, NW-probe is trained in two stages. In the first stage, we train the feature extractor weights ($\varphi$) only using an $NW^B_e$ training strategy (Eq. (7) or (8)). In the second stage, we freeze $\varphi$ and train a linear probe ($w$) on top of the frozen representations. NW-probe provides value as an academic exercise, as it can be directly compared with IRM; the causal assumptions and constraints are theoretically identical. Interestingly, we find that it outperforms IRM in the tasks we try, suggesting that $NW^B_e$ training strategy learns better invariant representations. > In figure 7 of the appendix, the rightmost accuracies for NWb (close to 90%) are substantially higher than those reported in table 2 (around 60%). What is the reason? The reason for this difference is that we report F1 scores in Table 2, but we report accuracies in Figure 7. As we mention in Lines 525-6, [2] finds that F1 score is not a good metric for comparing models with different label imbalances. In practice, we find graphs akin to Figure 7 but using F1 scores are not as striking as using accuracies. > Also regarding this figure 7, I am curious why the balanced plot increases quite fast as the test set becomes more and more imbalanced. It is a bit surprising given that the method consists in class-balancing at training isn’t it ? Have you investigated why? Note that in Fig. 7, a prevalence value of 0.25 indicates that 25% of samples in training are Y=0. Thus, the plot sweeps from low prevalence values for Y=0 (~25%) to high prevalence values (>75%). Note that the test-set prevalence is ~85%, so the right-most side of Fig. 7 shows performance when the train and test set prevalences are close. We agree with the reviewer that the Balanced (orange) plot increases as the prevalence of label Y=0 increases, although it doesn’t increase as quickly as Imbalanced (blue). Thus, while $NW^B$ mitigates label shift to an extent, increasing the prevalence of Y=0, thereby matching the train/test prevalences, leads to improved performance. However, we observe that the highest gap between the orange and blue lines are at low prevalence values. Thus, when prevalence values between train and test are maximally different, $NW^B$ proves to be a good robust predictor. > Also regarding balancing, I think that an additional baseline corresponding to ERM with class-balancing could be useful… In response to this comment, we have trained an ERM variant with balanced classes per environment for ISIC, which we denote $ERM^B$, and present the results below. | | F1 score | | ----------- | ----------- | | $ERM$ | 58.2 (2.9)| | $NW^B$, Ensemble|63.9 (3.8)| | $ERM^B$|63.0 (2.5)| We find that performance is on-par with $NW^B$. This is expected as the theoretical assumptions are the same for both models; that is, removing dependence of E on Y via class balancing. We plan to add these results to our revised manuscript. > How was the number of clusters set? A value of $k=3$ was chosen as it seemed to balance good error rate performance (see row 1 of Figure 2 in [1]) with computational efficiency. In [1], accuracy is shown to be stable for varying values of $k$. We plan to add this detail to our revised manuscript. > Despite the very interesting method and important questions addressed, I think the paper experimental part still needs a little work. We hope that our additional results and explanations regarding experimental details provide a clearer picture of our work. [1] Alan Q. Wang and Mert R. Sabuncu. A flexible nadaraya-watson head can offer explainable and calibrated classification. Transactions on Machine Learning Research, 2023. [2] Jan Brabec, Tomáš Komárek, Vojtech Franc, and Lukáš Machlica. On model evaluation under non-constant class imbalance, 2020. --- Rebuttal Comment 1.1: Title: Answer to authors Comment: I would like to thank the authors for their detailed and thoughtful rebuttal. My concerns were well addressed as long as the proposed clarifications are added to the manuscript. Just to make sure my comment concerning IRM is clear, I think that saying you “finetune a” and **linear classifier** instead of a "**fully connected classifier**“ in l.205 would be clearer and avoid doubts concerning fairness. --- Reply to Comment 1.1.1: Title: Response to reviewer d2Ui Comment: We thank the reviewer for their prompt response and positive feedback. We will be sure to change the wording in l.205 as suggested by the reviewer, as we agree this is clearer.
Rebuttal 1: Rebuttal: We thank the reviewers for their positive and constructive feedback. Here, we provide a general response to all reviewers, and provide a point-by-point response to each reviewer’s comment/questions below their corresponding review. ### Contributions As raised by reviewers 48ug and ​​xs7Q, we enumerate the contributions of our paper: 1. Our primary contribution is manipulation of the support set in an NW head during training for learning invariant representations, and for the broader purpose of domain generalization. To the best of our knowledge, ours is the first work to approach domain generalization in a non-parametric manner, as well as the first work to experiment with manipulating the support set to inject prior information into the training process. 2. Another (minor) contribution is that we motivate label shift and IRM simultaneously through d-separation in a causal DAG. To the best of our knowledge, ours is the first to model label shift as an intervention on the Y node in the DAG in Fig. 2a. In our experiments, we find that both lead to a performance boost. Our non-parametric method enforces both these constraints in one model. We clarify that the NW head itself is not a novel contribution, nor is our causal setup related to IRM (Fig. 2b), which has been proposed in many prior works. ### Advantages of the NW Head As raised by reviewers d2Ui and f7MY, we enumerate the advantages of NW over baselines and refer the reader to Section 6 of our paper for a discussion of its disadvantages: 1. The Implicit variant (NWbe-Implicit) has no hyperparameter to tune. This variant is competitive with and often outperforms state-of-the-art baselines which all require tuning a hyperparameter coefficient in the regularized loss. 2. The NW head enables interpretability by interrogating nearest neighbors in the feature space. Since these neighbors directly contribute to the model’s prediction (Eq. 2), interrogation enables a user to see what is driving the model’s decision-making. This not only allows for greater model transparency, but also enables interrogating the quality of the invariant features. See Figs. 8 and 9 in the included PDF and “Additional Changes” below. Note this is not possible with parametric baselines, see [1]. 3. Intuitively, we believe our non-parametric approach to enforcing invariance across environments is more natural and intuitive than baseline methods because an environment is encoded by manipulating the support set to contain *real samples* only from that environment. Other baseline methods resort to proxy methods to enforce invariance [3-5]. ### Additional Changes 1. Prompted by reviewer d2Ui, in Figs. 8 and 9 in the included PDF, we provide both a visual and quantitative exploration of the interpretability capabilities of the NW head. In Fig. 8, we show several query images and its 8 nearest neighbors in the feature space, for both $NW^B$ and $NW^B_e$ variants. We add a colored border around each neighbor to indicates which training environment it comes from. Interestingly, we notice that the neighbors for $NW^B_e$ come from a variety of environments (note a variety of colored borders), while the neighbors for $NW^B$ are less diverse. Fig. 9 quantifies this phenomenon. Thus, $NW^B_e$ leverages support images from a wider variety of environments to make its prediction, suggesting that it captures more invariant representations. 2. Prompted by reviewer xs7Q, we have experimented with 2 additional inference modes: k-NN and HNSW (Hierarchical Navigable Small Worlds), a fast approximate nearest neighbor algorithm [6]. These results are shown in Table 5 in the included PDF. We choose $k=20$ based on prior work [2]. HNSW is about 2x faster in total runtime on a GPU than full k-NN; we plan to add these findings to our computational results in Table 4. Overall, we observe that k-NN and HNSW perform nearly identically for all the datasets and variants. Additionally, both modes perform better in terms of mean performance on Camelyon-17, and perform on par with the best-performing modes for ISIC (Cluster) and FMoW (Ensemble). However, they generally have higher variances across model runs. We suspect that less total samples used in the support (20 vs. more than 1000 for Full) and the fact that not all classes are guaranteed to be represented in the support may lead to more unstable results. ### Further Directions Finally, we would like to highlight the future directions that our work may inspire. In general, training-time manipulation of a nonparametric support provides a novel means of imposing prior knowledge. Intuitively, every manipulation *creates a new classification problem*. In this work, we explored 2 such sources of prior knowledge: environment/domain knowledge, and assumption of balanced labels. We show that encoding these priors in a non-parametric way results in comparable to superior results compared to methods which encode priors via a regularized loss, and in certain cases obviates the need for tuning the associated hyperparameter. Further research may explore other ways to leverage this new degree of flexibility. As we mention in the Conclusion, an interesting way to exploit any prior knowledge of class imbalances might be to upweight the occurrence of samples for more prevalent labels during training. [1] Wang et al. A flexible nadaraya-watson head can offer explainable and calibrated classification. Transactions on Machine Learning Research, 2023. [2] Kotelevskii et al. Nonparametric Uncertainty Quantification for Single Deterministic Neural Network, 2022. [3] Wald et al. On calibration and out-of-domain generalization, 2021 [4] Sun et al. Deep coral: Correlation alignment for deep domain adaptation, 2016 [5] Krueger et al. Out-of-distribution generalization via risk extrapolation (rex), 2021. [6] Malkov et al. Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs, 2016. Pdf: /pdf/ebbbd17c82bdda3d0d9fa36b375973ca177538a1.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Probabilistic Exponential Integrators
Accept (poster)
Summary: This paper proposes a probabilistic integrator for solving stiff semi-linear ODEs that decouples linear and nonlinear components for improved accuracy. The approach uses an integrated Ornstein-Uhlenbeck process (IOUP) prior and posterior inference extends classical exponential integrator to exactly solve the linear components. Nonlinearities are approximated via local linearization similar to the extended Kalman filter (EKF). The authors present L-stability results and address computation through proposed approximations of the Jacobian that capture only the linear dynamics. Strengths: This is a solid paper; well-written, well-motivated, nice execution. The results aren’t groundbreaking when compared to baseline, but there are clear benefits of the proposed method, particularly when nonlinearities are less significant. Furthermore, results are presented honestly and limitations are clearly identified. Weaknesses: The paper fails to highlight any benefit of treating the initial value problem (IVP) as a Bayesian inference task. The posterior distribution *should* characterize uncertainty over the solution, and thus it is possible to quantify and report confidence in the solution. It is further possible to scrutinize the appropriateness of the IOUP as a prior. The paper does not address this, but instead prioritizes numerical accuracy and execution time. While these metrics are certainly important, they are not the primary motivation of probabilistic numerics. This is not a critical flaw, but perhaps a missed opportunity to demonstrate the benefit of a Bayesian approach to a numerical method. Unsurprisingly, the proposed method is fairly sensitive to nonlinearity of the ODEs. The experiment in Sec. 4.1 demonstrates that accuracy degrades as the quadratic term becomes more significant. In fact, the proposed method seems to be more sensitive to this than baseline IWP methods, albeit with uniformly lower error. For the simple ODE in Sec. 4.1 all methods achieve very low error (on the order of $10^{-10}$) so the observed improvement with the IOUP is marginal unless the ODE is highly linear. Some detailed comments: * Fig. 5 should reference subfigures (e.g. (a) is never explicitly mentioned). * L268: Typo "Since th problem..." Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: * The authors suggest that probabilistic integrators may be preferable for expensive-to-evaluate functions (L259). How? Can the method avoid explicit evaluation via prediction? * The EKF can be quite unreliable in even simple nonlinear dynamical systems, leading to preferred approaches such as the UKF. Would not such an approach be preferable here to avoid the local linear approximations? What is the main challenge in a more accurate approximation? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Limitations are clearly described in Sec. 5 Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We first want to thank the reviewer for their insightful comments, feedback, and questions. In the following we will address each point separately. > The paper fails to highlight any benefit of treating the initial value problem (IVP) as a Bayesian inference task. The posterior distribution should characterize uncertainty over the solution, and thus it is possible to quantify and report confidence in the solution. It is further possible to scrutinize the appropriateness of the IOUP as a prior. The paper does not address this, but instead prioritizes numerical accuracy and execution time. While these metrics are certainly important, they are not the primary motivation of probabilistic numerics. This is not a critical flaw, but perhaps a missed opportunity to demonstrate the benefit of a Bayesian approach to a numerical method. The reviewer brings up a fair point. However, we would like to point out the paucity of papers critically examining different priors. In fact, to our knowledge, this is the first to give a clear argument for using anything other than the canonical IWP prior. With this in mind, we hope the present contribution can spark interest in taking the prior selection more seriously in upcoming investigations by the PN community. > Unsurprisingly, the proposed method is fairly sensitive to nonlinearity of the ODEs. The experiment in Sec. 4.1 demonstrates that accuracy degrades as the quadratic term becomes more significant. In fact, the proposed method seems to be more sensitive to this than baseline IWP methods, albeit with uniformly lower error. The performance benefits of the probabilistic exponential integrator with IOUP prior over a solver with IWP prior depend compeletely on the known linear part opf the ODE. If the linear part is set to zero, they both coincide and the IOUP-based solver performs exactly as the solver with IWP prior. This behaviour is exactly what the experiment in Section 4.1 and Figure 3 aim to visualize, showing that as the linear part increasingly dominates the dynamics, the benefit of using an IOUP prior increases. Note also that in this experiment the IOUP prior never performs worse than the EKL&IWP combination, so we see this not as "sensitivity to nonlinearity" but rather as "leveraging the semi-linearity". > The authors suggest that probabilistic integrators may be preferable for expensive-to-evaluate functions (L259). How? Can the method avoid explicit evaluation via prediction? The comment in line 259 refers to choosing between the standard and Rosenbrock version of the proposed probabilistic exponential integrator, and the suggestion stems purely from the required number of function evaluations per step: Since the local linearization of the Rosenbrock method requires evaluating the vector field, the standard probabilistic exponential integrator requires less evaluations and thus might be preferrable when the vector field becomes exceedingly expensive to evaluate. But note that both methods perform at least one evaluation of the vector field per step. > The EKF can be quite unreliable in even simple nonlinear dynamical systems, leading to preferred approaches such as the UKF. Would not such an approach be preferable here to avoid the local linear approximations? What is the main challenge in a more accurate approximation? This is a very good point. It is indeed well known that the UKF or other filtering/smoothing methods can be advantageous over the EKF/EKS in certain state estimation problems. In the specific context of probabilistic ODE solvers, the UKF has been previously suggested [1,2], but a more extensive evaluation of its properties and utilities would certainly be interesting in order to better understand for which types of problems it would be most beneficial. Note also that in the context of probabilistic ODE solvers, a more accurate linearization not the only way to improve the accuracy of the method: we could also just select a smaller step size. This is not the case for most standard filtering problems. And in the regime of "very small" steps, a local Taylor approximation becomes increasingly accurate. [1] Kersting et al, "Active uncertainty calibration in Bayesian ODE solvers", UAI (2016) [2] Tronarp et al, "Probabilistic solutions to ordinary differential equations as nonlinear Bayesian filtering: a new perspective", Statistics and Computing (2019) --- Rebuttal Comment 1.1: Comment: Thanks. Your responses cleared up a couple points. I will keep my scores as-is and am willing to argue in favor of the paper.
Summary: This paper presents probabilistic exponential integrators as a solver for stiff semi-linear ODEs.Their solver can also be extended to solve general non-linear ODEs with iterative re-linearization. The proposed method is shown to be L-stable in theory and empirically more stable than existing methods. Strengths: The paper is well written. It tackles the problem of solving stiff systems, which is a great challenge in probabilistic numerics. The authors exploit the properties in semi-linear ODEs and develop a probabilistic solver that integrates the fast, linear dynamics into the prior model. The solver is also extended to general nonlinear ODEs via iterative linearization. Both theoretical and empirical analysis are sound. Weaknesses: The main difficulty of conventional of stiff ODEs solvers is its sensitivity to step size: small step size is required for stability. Proposition 3 shows the proposed solver is L-stable. However, the experiments mainly focus on final errors across various step sizes. A detailed investigation of stability (boundedness of solutions?) behaviors of solvers under large step size may help readers better understand the proposed approach.  Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weakness Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See Weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We first want to thank the reviewer for their insightful comments and feedback. > the experiments mainly focus on final errors across various step sizes. A detailed investigation of stability (boundedness of solutions?) behaviors of solvers under large step size may help readers better understand the proposed approach. Thank you for bringing this up. The main advantage of probabilistic exponential integrators as introduced in the paper is indeed their improved stability. A more detailed investigation purely focused on stability could have been helpful to make this message more clear; but we want to highlight that the improved performance for large step sizes is demonstrated in the experiments: In both Figure 4 and 5, the "error vs. step size" plots show that as step sizes increase, the non-exponential methods start to fail (either by diverging or achieving very large errors >>1) whereas the exponential methods still provide meaningful solutions; see for example step size $10^{-1}$ in both figures. This demonstrates the improved stability of the exponential integrators. --- Rebuttal Comment 1.1: Comment: Thanks for for response. I will keep my score
Summary: This paper proposed probabilistic exponential integrators, which is a new class of probabilistic solvers for stiff semi-linear ODEs. More specifically, the integrated OU process is introduced for a functional prior that directly incorporate the linear part of the dynamics. As a result, the proposed methods can be more stable than their non-exponential probabilistic counterparts. In addition, the authors also provide an extension for general non-linear systems, which can be viewed as a probablistic solver based on exponential Rosenbrock-type methods. Strengths: 1. The paper is overall clearly written and nicely organized. 2. The IOUP prior is neat. Also, some theoretical results of the proposed probabilistic exponential integrator have been established (e.g., equivalence to the classic exponential trapezoidal rule and L-stability). Weaknesses: 1. The proposed methods are only for semi-linear ODEs, while other baselines can be applied to more general cases. 2. The IOUP prior is more expensive than the IWP prior. 3. The Kalman filter/smoother based solver has cubic scaling in the ODE dimension. 4. It seems that the advantage of the method is more significant when the step size is large (error is large), and becomes negligible when the step size is small. Note that 2 and 3 have been adimitted by the authors. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What is the overhead computational time for computing the matrix exponentials? Is it contained in the runtime? 2. The proposed method can be more accurate when the linear part is more dominant. Since EK1 also introduces a linear approximation, would it perform better when combined with the IOUP prior? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, they do. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We first want to thank the reviewer for their comments, feedback, and questions. In the following we will address each open point separately. > Weakness 1. The proposed methods are only for semi-linear ODEs, while other baselines can be applied to more general cases. The main "probabilistic exponential integrator" proposed in the paper is indeed only fully applicable to semi-linear ODEs. But, as also stated in the Summary section of your review, the proposed Rosenbrock-type method of Section 3.6 is applicable to general non-linear ODEs. > Weakness 4. It seems that the advantage of the method is more significant when the step size is large (error is large), and becomes negligible when the step size is small. This is a very good point, and this is precisely why "stability" is so important for numerical ODE solvers. If the step size is "small enough", even the simplest forward Euler method eventually computes an accurate solution---but the problem is that for extremely small steps the runtime becomes prohibitively large. By using more stable solvers, such as exponential integrators, accurate solutions can be computed with larger step sizes. > Question 1. What is the overhead computational time for computing the matrix exponentials? Is it contained in the runtime? The cost of computing the matrix exponential depends on the drift matrix and its size, and thus on the specific ODE and the order of the prior. For fixed step sizes, the proposed probabilistic exponential integrator computes _one_ matrix exponential, and then re-uses the computed transition matrices for each step. Thus, it computes exaclty one matrix exponential more than a solver with IWP prior. The Rosenbrock-type method however needs to compute a matrix exponential _at each timestep_ as the linear part changes at each step, and is thus computationally more demanding, but also more stable, as shown in Figured 4 and 5. The reported runtimes include all parts of the solver and thus also the computation of the matrix exponential. > Question 2. The proposed method can be more accurate when the linear part is more dominant. Since EK1 also introduces a linear approximation, would it perform better when combined with the IOUP prior? This is a very good question. Indeed, the proposed method can also be combined with the exact first-order Taylor linearization (EK1). To keep the number of methods in the experiments and figures reasonably small, we have chosen to combine the probabilistic exponential integrator only with the EKL (thus not requiring any additional linearization during the solve), and the Rosenbrock-type method with the EK1 (as both the Rosenbrock-type method and the EK1 require local linearizations anyways); but both solvers can be combined with the different linearization strategies mentioned in Section 3.3. In experiments not included in the paper, we have also tested the non-Rosenbrock method with the EK1, and we have observed that this can be more accurate than the EKL---but with the additional cost/downside of requiring local linearizations. --- Rebuttal Comment 1.1: Title: Thanks for the reply Comment: Thanks for the response. I will keep my score.
Summary: The paper studies classical exponential integrators using the framework of probabilistic numerics, where Bayesian formalism is used to study numerical approximations of deterministic dynamical systems. The main result seems to be Proposition 1, which establishes exponential stability (called L-Stability for some reason) property for the exponential integrator. Strengths: The paper reasonably well introduces probabilistic numerics and studies this in the context of ODEs and tackles important notion of stability of numerical approximation. Weaknesses: The paper transfers well-understood properties of numerical approximations of ODEs using the formalism of probabilistic numerics. In my view, this is not a sufficiently novel contribution. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Calling the Ornstein-Uhlenbeck process a Gauss-Markov in eqn 3 is overkill. - Notation in eqn 5 of normal distribution not clear what. Why three parameters? - Section 2.2 I don't see why \mathcal I [y] = 0 when y is a solution to ODE. This says that y_t-f(y_t,t)=0 not \frac{d}{dt} y_t -f(y_t,t)=0 - Explain the role of Z in 9b - Paper seems to treat ODEs, but some numerical examples are derived for PDEs. Of course, semi-group theory for PDEs links with exponential integrators, but this seems to go beyond this paper. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Methodology applies to a particular class of ODES Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We first want to thank the reviewer for their comments, feedback, and questions. In the following we will address each open point separately. > Calling the Ornstein-Uhlenbeck process a Gauss-Markov in eqn 3 is overkill. Probabilistic numerical ODE solvers have in general been established with any Gauss--Markov prior that allows for easy access of derivatives [1]. Since section 2 is the background section of the paper, we introduced ODE filters in their generality. > Notation in eqn 5 of normal distribution not clear what. Why three parameters? The first entry of `N(x; m, C)` is not a parameter, but the variable that is described by the Gaussian. This is just a slighly more thorough notation than the equivalent short-hand notation `N(m, C)`. To prevent confusion, we will make sure that the notation is consistent accross the paper. > Section 2.2 I don't see why \mathcal I [y] = 0 when y is a solution to ODE. This says that y_t-f(y_t,t)=0 not \frac{d}{dt} y_t -f(y_t,t)=0 $E_1$ is a selection matrix that selects the entry that corresponds to the first derivative, and similarly $E_0$ selects the zeroth derivative. In terms of the Gauss--Markov state representation $Y$, $\mathcal{I}$ therefore encodes exactly the ODE. The statement $\mathcal{I}\[y\] \equiv 0$ comes with a slight abuse of notation. We could also define $\mathcal{I}$ not via the "state" $Y$ as in eq. (7), but in the space of the $d$-dimensional function $y$, as $\mathcal{I}\[y\](t) = D y(t) - f(y(t), t)$, where $D$ is the derivative operator. On $Y^{(0)}$ this then gives $\mathcal{I}\[Y^{(0)\}](t) = D Y^{(0)}(t) - f(Y^{(0)}(t), t) = Y^{(1)}(t) - f(Y^{(0)}(t), t)$. This formula is equivalent to the formulation in the paper (eq. 7) and, since the algorithm only operates on the state $Y$, it also closer to the actual implementation of the method. > Explain the role of Z in 9b $Z_n$ formally describe the data, _before_ it is actually observed. Then, when we add the actual data to the model as defined in eq. (9), we can do (approximate) Bayesian inference. This notation with prior and likelihood model is standard in Bayesian filtering and smoothing (and Bayesian inference in general). In the specific case of probabilistic ODE solvers the data is exactly zero everywhere, since we want the ODE to hold on the grid. > Paper seems to treat ODEs, but some numerical examples are derived for PDEs. The method of lines can be used to transform semi-linear PDEs into semi-linear ODEs, as we did in the experiment section. The semi-linear ODE is then solved with the proposed method. Note that spatial discretizations of PDEs are also a very common example to motivate exponential integrators; see for instance the introduction of [2]. > Methodology applies to a particular class of ODES The goal of the paper was indeed to develop methods particularly for stiff semi-linear ODEs. And in the context of numerical ODE solvers, developing specific methods for specific problems is quite common. We still want to highlight Section 3.6 of our paper: The proposed exponential integrator can also be applied to any nonlinear ODE by automatically and continuously linearizing the problem, in a manner that is very similar to classic exponential Rosenbrock methods. [1] Tronarp et al, "Bayesian ODE solvers: the maximum a posteriori estimate", Statistics and Computing (2021) [2] Hochbruck et al, "Exponential Rosenbrock-Type Methods", SIAM Numerical Analysis (2009) --- Rebuttal Comment 1.1: Comment: Thank you for clarifying a few points and answering my questions. As a result of these, I increased my score by 1.
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Bilevel Coreset Selection in Continual Learning: A New Formulation and Algorithm
Accept (poster)
Summary: Past works have limitations in terms of scalability, formulation approximation, or performance. This paper offers an efficient coreset selection problem with provable theoretical guarantees. That is, the authors solve a bilevel optimization on a probability distribution over the dataset with loss minimization on the selected dataset that is guided into a low-dimensional manifold via a smoothed top-K loss as a regularizer on the probability distribution. Strengths: The approach is simple, yet well-motivated by the limitations of previous works. The writing addresses the limitations well and is easy to follow. Weaknesses: (1) Further analysis into the top-K loss and its causal effects would aid in understanding the mechanics of probability distribution being regularized into a low-dimensional manifold. (2) comparisons with state-of-the-art non-coreset replay methods (e.g., using stability plasticity scores [1], contrastive representation based selection [2]), as it would be meaningful to see the prospect of this direction. (3) [minor] main illustration figure 1 could include more technical information (embedding some important eqns for example) or include an additional figure to guide the reader better. [1] Sun et al, Exploring Example Influence in Continual Learning, NeurIPS 2022 [2] Kim et al, Continual Learning on Noisy Data Streams via Self-Purified Replay, ICCV 2021 Technical Quality: 3 good Clarity: 2 fair Questions for Authors: if Weaknessses (1) could be further analyzed and shown, as well as comparison with (2) other state of the art rehearsal baselines comparison would be informative. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Not mentioned in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer b44u, Thanks for your reviews. We have addressed your concerns below. **Q1: Further analysis into the top-K loss and its causal effects would aid in understanding the mechanics of probability distribution being regularized into a low-dimensional manifold.** **A1**: We guess you mean the top-K regularizer. To further analyze the effects of Top-K regularizer, we conduct the ablation study with different values of regularizer coefficient $\lambda$. The performance results with different $\lambda$ are shown in Table 4 and the corresponding average Top-K summations of coreset weights are in Table 5. In our experiment, there are $50$ candidate samples in each mini-batch data, and the summation of $50$ coreset weights is equal to $1.00$. Top-K summation of weights increases as $\lambda$ increases, which imposes higher probabilities on the Top-K entries and lower probabilities on the rest candidates. The best performance is achieved when $\lambda=0.1$, which means $\lambda$ balances the trade-off between the loss function and regularizer strength: if $\lambda$ is too large, the algorithm primarily focuses on choosing the important samples instead of updating the model parameter, and vice versa. **Table 4. Ablation study for the regular coefficient $\lambda$** | Measure | $\lambda$=0.01 | $\lambda$=0.05| $\lambda$=0.1 | $\lambda$=0.5 | $\lambda$=1 | |-------------|--------------------|-------------------|----------------------|------------------|------------------| | ACC | 59.37±0.35 | 60.23±0.43 | **61.60±0.14** | 59.42±1.45 | 58.89±1.64 | | FGT | 0.095±0.098 | 0.074±0.054 | **0.051±0.015** |0.138±0.075 | 0.128±0.076 | **Table 5. TopK summation of coreset weights (K=10)** | Measure | $\lambda$=0.01 | $\lambda$=0.05 | $\lambda$=0.1 | $\lambda$=0.5 | $\lambda$=1 | |-------------|--------------------|--------------------|-------------------|------------------|-----------------| | Sum | 0.41/1.00 | 0.56/1.00 | 0.63/1.00 | 0.73/1.00 | 0.84/1.00 | **Q2: Comparisons with state-of-the-art non-coreset replay methods (e.g., using stability plasticity scores [1], contrastive representation based selection [2]), as it would be meaningful to see the prospect of this direction.** **[1] Sun et al, Exploring Example Influence in Continual Learning, NeurIPS 2022 [2] Kim et al, Continual Learning on Noisy Data Streams via Self-Purified Replay, ICCV 2021** **A2**: We compare two state-of-the-art replay-based methods MetaSP [1] and SPR [2] on Split CIFAR-100, Multiple Datasets, and Tiny-ImageNet. The results are shown in Table 6, 7, and 8 respectively. Our method BCSR outperforms others on balanced, imbalance, and noise settings. In the imbalanced dataset, BCSR demonstrates relatively higher performance. The reason is that the algorithms in [1] and [2] do not select the most relevant examples in the replay member while our coreset selection method BCSR chooses the most informative samples and saves them into the replay buffer according to the model parameter. **Table 6. Experiments on Split CIFAR-100** | Methods | Balanced | | Imbalanced | | Label Noise | | |----------------------|---------------|-----------|-----------------|----------|------------------|-----------| | | AVG ACC | FGT | AVG ACC | FGT | AVG ACC | FGT | | MetaSP [1] | 60.14±0.25 | 0.056±0.23 | 43.74±0.36 | 0.079±0.014 | 57.43±0.54 | 0.086±0.007 | | SPR [2] | 59.56±0.73 | 0.143±0.064 | 44.45±0.55 | 0.086±0.023 | 58.74±0.63 | 0.073±0.010 | | BCSR | **61.60±0.14** | **0.051±0.015** | **47.30±0.57** | **0.022±0.005** | **60.70±0.08** | **0.059±0.013** | **Table 7. Experiments on Multiple Datasets** | Methods | Balanced | | Imbalanced | | Label Noise | | |----------------------|---------------|------------|-----------------|----------|------------------|-----------| | | AVG ACC | FGT | AVG ACC | FGT | AVG ACC | FGT | | MetaSP [1] | 57.14±1.10 | 0.113±0.042 | 41.32±1.50 | 0.103±0.053 | 47.14±1.66 | 0.081±0.027 | | SPR [2] | 56.20±1.91 | 0.124±0.036 | 40.79±1.73 | 0.143±0.051 | 49.77±1.58 | **0.062±0.024** | | BCSR | **59.89±0.95** | **0.096±0.005** | **45.13±0.54** |**0.046±0.008** | **49.97±1.14** | 0.064±0.031 | **Table 8. Experiments on Tiny-ImageNet** | Methods | Balanced | | Imbalanced | | Label Noise | | |----------------------|---------------|------------|-----------------|----------|------------------|-----------| | | AVG ACC | FGT | AVG ACC | FGT | AVG ACC | FGT | | MetaSP [1] | 43.33±0.32 | 0.127±0.002 | 36.75±0.57 | 0.086±0.006 | 37.18±0.76 | 0.068±0.007 | | SPR [2] | 42.79±0.50 | **0.102±0.009** | 36.55±0.74 | 0.070±0.026 | 39.89±0.53 | 0.065±0.021 | | BCSR | **44.13±0.33** | 0.106±0.001 | **38.59±0.11** |**0.047±0.004** | **40.72±0.56** |**0.055±0.006** | **Q3: [minor] main illustration Figure 1 could include more technical information (embedding some important eqns for example) or include an additional figure to guide the reader better.** **A3**: Thank you for your suggestion. We will add a description under the figure in the final version to draw the connection from texts to Equation 1 and pytorch-style pseudocode in Algorithm 1. --- Rebuttal Comment 1.1: Comment: Thank you for responding to the concerns. Without some experimental details for SPR and MetaSP, I remain skeptical of the results as the performance gap is minute. --- Reply to Comment 1.1.1: Title: Experimental details Comment: Thanks for your response. Let us provide more details for the experimental settings. MetaSP[1] and SPR[2] show good performance among reply-based methods, but the experimental setups are a bit different due to the coreset selection compared with their original paper. 1) The memory buffer of coreset-based is small. Note that the idea of coreset selection is to find the minimal coreset which stands for the most representative examples from the data stream in each task. In SPR, there are two buffers: the delayed buffer D temporarily stocks the incoming data stream, and the purified buffer P maintains the cleansed data. To satisfy the requirement of coreset experiments, we set the size of both buffers as 100 on Split CIFAR-100, 200 on Split Tiny-Imagenet, and 83 on Multiple Dataset. The streaming batch size is set to 10 on Split CIFAR-100 and Multiple Dataset, and 20 on Split Tiny-Imagenet. These buffer sizes and batch sizes follow the settings in coreset-based methods [3,4]. We keep the same buffer and batch size in MetaSP[1], which actually is a big challenge for non-coreset methods. 2) Data setting is challenging. The experimental data include balanced, imbalanced, and noise-label. We follow [5] to transform the original dataset to an imbalanced long-tailed CIFAR-100, and set a 20% noise rate for random label shift on three datasets. BCSR outperforms other methods on 3 benchmarks, especially on the imbalanced case (e.g. 6.03% improvement compared to SPR, 8.24% improvement compared to MetaSP on CIFAR-100). 3) Details of other hyperparameters. It is worth noting that the training model (called base model in SPR) which is used for evaluation traverses data of the current task only once. The learning rate for model training is set as 0.15 on CIFAR-100, 0.20 on Tiny-ImageNet, and 0.10 on Multiple Datasets. Other experimental parameters for sample selection (including computation of example influence in MetaSP) and buffer updating (e.g., self-centered filtering for current data) are kept as their own settings. [3] Jaehong Yoon, Divyam Madaan, Eunho Yang, and Sung Ju Hwang. Online coreset selection for 489 rehearsal-based continual learning. arXiv preprint arXiv:2106.01085, 2021. [4] Borsos, Zalán, Mojmir Mutny, and Andreas Krause. "Coresets via bilevel optimization for continual learning and streaming." Advances in neural information processing systems 33 (2020): 14879-14890. [5]Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge Belongie. Class-balanced loss based on effective number of samples. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9268–9277, 2019.
Summary: The authors present a new approach to coreset selection in rehearsal-based continual learning. The authors claim that traditional methods optimise over discrete decision variables, resulting in computationally expensive processes. To address this, they propose a new bilevel formulation where the inner problem finds a model that minimizes expected training error, and the outer problem learns a probability distribution with approximately K nonzero entries through adding a regularizer and ensuring convergence to the ϵ-stationary point with O(1/ϵ^4) complexity. The authors also perform extensive experiments demonstrate superior performance compared to existing methods in various continual learning settings. Strengths: 1. The availability of provided code facilitates reproducibility, making it a straightforward process. 2. The authors present experimental results that demonstrate a pretty good performance in comparison to the selected baselines. 3. The inclusion of the added regularizer seems promising, but it requires a more detailed and elaborate explanation to justify its contribution adequately. Weaknesses: **Weaknesses** My main concern is that this work appears to be primarily an incremental extension of prior work [2] by adding a regularizer. While this is not necessarily negative, I do not see any significant new improvements from an algorithmic perspective. Specifically, in lines 46-52, the authors assert that the key challenges of bilevel coreset selection are (1) the expensive nature of optimization over cardinality constraints and (2) the nested nature of bilevel optimization. However, these weaknesses pertain to [1] rather than the coreset selection problem itself. In order to address these challenges, [2] was proposed. The contributions that the authors claim in this paper seem to stem from [2] rather than the proposed method itself. **Suggestion:** 
 Certainly, enhancing the motivation of the paper and clarifying the advantages of the proposed method compared to [7] is crucial for improving the paper. The author should allocate more space to address these aspects. [1] Borsos, Z., Mutny, M., & Krause, A. (2020). Coresets via bilevel optimization for continual learning and streaming. Advances in neural information processing systems, 33, 14879-14890. [2] Zhou, X., Pi, R., Zhang, W., Lin, Y., Chen, Z., & Zhang, T. (2022, June). Probabilistic bilevel coreset selection. In International Conference on Machine Learning (pp. 27287-27302). PMLR. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could you please clarify the differences and advantages between your method and [7]? In lines 135-139, the author simply states that "so this formulation [7] oversimplifies the coreset selection problem." However, this statement does not highlight any specific disadvantages of [7]. I will consider improving my score if my question and concern could be well-addressed. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your reviews. We have addressed your concerns below. **Q1: Could you please clarify the differences and advantages between your method and [2]? In lines 135-139, the author simply states that "so this formulation [2] oversimplifies the coreset selection problem." However, this statement does not highlight any specific disadvantages of [2].** **[1] Borsos, Z., Mutny, M., & Krause, A. (2020). Coresets via bilevel optimization for continual learning and streaming. Advances in neural information processing systems, 33, 14879-14890.** **[2] Zhou, X., Pi, R., Zhang, W., Lin, Y., Chen, Z., & Zhang, T. (2022, June). Probabilistic bilevel coreset selection. In International Conference on Machine Learning (pp. 27287-27302). PMLR.** **A1**: Thank you for this insightful question. We will improve our presentation, especially the comparison with [2]. The challenges of bilevel continual learning in [1] are: 1) performing optimization over discrete decision variables with greedy search; 2) the nested structure of bilevel problems. Both [2] and our method BCSR relax the discrete bilevel optimization problem into a continuous one. Our approach improves over [2] due to the following differences. 1) Zhou et al. [2] relaxes the bilevel formulation to minimize the loss function over the Bernoulli distribution $s$, i.e., $\min_{s\in \mathcal{C}} \Phi(s)$, and develop a policy gradient solver to optimize the Bernoulli variable. Their gradient $\nabla_{s}\Phi(s) = E_{p(m|s)}\Big[L(\theta^*(m))\nabla_{s}\ln p(m|s)\Big]$ discards the implicit gradient of $L(\theta^*(m))$ in terms of $s$. However, $\theta^*(m)$ actually depends on the mask $m$, and $m$ depends on the Bernoulli variable $s$. Therefore there is an implicit gradient because changes in $s$ will cause changes in $\theta^*(m)$: this fact is ignored by Zhou et al. [2] and hence it is a disadvantage. In contrast, our bilevel optimization computes the hypergradients for the coreset weights $w$ ($0\leq w \leq 1$ and $\|w\|_1 =1$ ), which considers the implicit dependence between $\theta^*(w)$ and $w$. The experiment results also prove that our method is much better than Zhou et al. [2] for coreset selection. 2) Zhou et al. [2] assumes that the inner loop converges to $\theta^*(m)$ exactly, which oversimplifies the analysis and may not hold in practice. In contrast, we carefully analyze the gap between estimated $\hat{\theta}(m)$ by our algorithm and the minimizer of the inner problem $\theta^*(m)$. Please note that we use $m$ (i.e., sample mask) in the rebuttal to follow the notation in Zhou et al. [2] but our paper uses notation $w$ to denote coreset weights (which is equivalent to sample mask). 3) Our new bilevel formulation introduces a novel smoothed Top-K regularizer, which is important as shown in our ablation studies (Table 6 and Section G in the original manuscript). In contrast, [2] does not use such a regularizer. Indeed, we show empirically that our algorithm BCSR is better than Zhou et al. [2] in a wide spectrum of benchmark datasets, as illustrated in Table 1~4 in the main text. --- Rebuttal 2: Title: Looking forward to post-rebuttal feedback! Comment: Dear Reviewer vGFs, Thank you for reviewing our paper. We have carefully addressed your concerns regarding the clarification of the comparison with the reference [7]. Please let us know if our response addressed your concerns about the presentation. If our response resolves your concerns, we kindly ask you to consider raising the rating of our work. Thank you very much! We are happy to discuss any additional questions you may have. [7] Zhou, X., Pi, R., Zhang, W., Lin, Y., Chen, Z., & Zhang, T. (2022, June). Probabilistic bilevel coreset selection. In International Conference on Machine Learning (pp. 27287-27302). PMLR. Best, Authors --- Rebuttal Comment 2.1: Comment: Thanks for your feedback. I will raise my score. I think most of my concerns are addressed.
Summary: This work addresses the coreset selection problem in rehearsal-based continual learning, focusing specifically on the application of bilevel optimization. The authors identify limitations in existing bilevel optimization-based coreset selection methods for continual learning, including high computational costs resulting from greedy search and the loss of bilevel optimization nature due to single-level equivalence. To overcome these drawbacks, the authors propose a new formulation that incorporates the probability simplex and a smoothed top-K regularizer. The latter enforces the K most important elements have larger weights. The authors develop a new stochastic bilevel method for this continuous and smooth loss. The effectiveness of the proposed method is demonstrated through comprehensive experiments, which also include various ablation studies. The authors establish the properties of the new loss function and provide guarantees on convergence. Strengths: 1. Both continual learning and bilevel optimization are timely topics.The exploration of the benefits of bilevel optimization in continual learning is an under-explored area, making the studied topic in this work both interesting and important. 2. The paper is well written and easy to follow. The inclusion of Figure 1, illustrating the process of training bilevel algorithms in continual learning, along with PyTorch-style pseudocode, greatly aids in understanding the entire process. 3. The introduction of the new loss function and bilevel algorithms is well-motivated. The replacement of the nuclear norm with a smoothed top-K regularizer, as proposed in the paper, is a good idea that leads to faster and improved bilevel optimization algorithms. The empirical evidence provided in Figure 3 serves as a strong justification for the efficacy of the new loss function and algorithms. 4. The experiments conducted in the paper are comprehensive. The evaluation encompasses various datasets, including multiple datasets, split Cifar100, and large-scale datasets like Tiny-ImageNet and Food-101. Additionally, various ablation studies are also included. 5. The paper establishes guarantees on convergence rate and explores the properties of the new loss function. Weaknesses: The proposed algorithm needs to compute the Hessian-vector products, which may be computation expensive in large-scale cases. This computational expense may pose challenges and limit the applicability of the algorithm. Is it possible to improve the efficiency of the proposed method by utilizing Hessian-free bilevel algorithms based on recent advancements (e.g., Liu et al. [1], Sow et al. [2])? [1] Liu, Bo, et al. "Bome! bilevel optimization made easy: A simple first-order approach." Advances in Neural Information Processing Systems 35 (2022): 17248-17262. [2] Sow, Daouda, Kaiyi Ji, and Yingbin Liang. "On the convergence theory for hessian-free bilevel algorithms." Advances in Neural Information Processing Systems 35 (2022): 4136-4149. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How to select the parameters such as N, Q and $\delta$ in Algorithm 3 for continual learning? It would be beneficial to provide suggestions or guidelines for choosing these parameters and to discuss the sensitivity of the algorithm's performance to such parameters. 2. What are the main challenges of the bilevel optimization analysis in the continual learning setting? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer AXy9, Thanks for your reviews. We have addressed your concerns below. **Q1: Is it possible to improve the efficiency of the proposed method by utilizing Hessian-free bilevel algorithms based on recent advancements?** **A1**: Yes, two types of Hessian-free methods may be considered. The first approach is to approximate the Hessian- and Jacobian-vector products via finite-difference estimation, i.e., using $[\nabla f(x+\alpha v)-\nabla f(x-\alpha v)]/(2\alpha)$ to approximate $\nabla^2 f(x)v$. Then, the entire process will not contain second-order information. The second approach is to apply the recent value-function-based problem reformulation and constrained optimization-based approaches [1]. We will investigate these ideas in future studies. [1] B. Liu et al. “Bome! bilevel optimization made easy: A simple first-order approach.” NeurIPS 2022. **Q2: How to select the parameters such as N, Q, and $\delta$ in Algorithm 3 for continual learning? It would be beneficial to provide suggestions or guidelines for choosing these parameters and to discuss the sensitivity of the algorithm's performance to such parameters.** **A2**: We conduct ablation studies to explore the sensitivity of hyperparameters ($N$ and $Q$) on Split CIFAR-100. The results are presented in Tables 2 ($N$) and 3 ($Q$). The model performance remains relatively stable when increasing inner loops ($N$) while fixing $Q$. But too large $N$ ($N\geq 15$) leads to performance degradation due to overfitting. The $Q$ loops show similar properties that a few $Q$ loops (e.g., $Q=3$) are enough to approximate the Hessian inverse product. Too small $Q$ or too large $Q$ will hurt the performance due to possible underfitting (e.g., $Q=1$) or overfitting (e.g., $Q=20$). $\delta$ is a factor for Gaussian noise. It is usually small enough for Gaussian smoothness, so it is fixed as $1e-3$ in the experimental setting. **Table 2. Ablation study for the inner loops (N) with fixed loops $Q=3$.** | Measure | N=1 | N=5 | N=10 | N=15 | N=20 | |-------------|------------------|------------------|-----------------|------------------|------------------| | ACC | 61.60±0.14 | 61.75±0.11 | 61.64±0.15 | 60.77±0.32 | 59.20±0.41 | | FGT | 0.051±0.015| 0.047±0.013| 0.063±0.017 | 0.074±0.021 | 0.079±0.035 | **Table 3. Ablation study for the loops Q with fixed inner loops $N=1$.** | Measure | Q=1 | Q=3 | Q=5 | Q=10 | Q=20 | |-------------|----------------------|-----------------|-------------------|------------------|------------------| | ACC | 52.14±1.53 | 61.60±0.14 | 61.57±0.15 | 58.42±0.53 | 57.80±1.31 | | FGT | 0.123±0.038 |0.051±0.015 | 0.064±0.012 | 0.173±0.045 | 0.162±0.041 | **Q3: What are the main challenges of bilevel optimization analysis in the continual learning setting?** **A3**: Existing works on bilevel optimization mainly focus on the unconstrained case, and the analysis often relies on the continuity between two adjacent iterates. As a comparison, our analysis in CL setting needs to deal with the constraint over the simplex, and it is non-trivial to handle the possible discontinuity caused by the projection. Another challenge lies in showing the smoothness of the problem as well as a hypergradient estimation error under the stochastic Top-K regularizer. --- Rebuttal Comment 1.1: Comment: Thanks for the response.
Summary: The paper presents a better approach for grasping the coreset-based bi-level optimization procedure used in the field of continual learning. The proposed method relies on the use of the proxy model to obtain a coreset, which in turn will affect the training of the original model. With that being said, the idea relies on relaxing the hard cardinality constraints usually used in the field of coreset construction for continual learning tasks, to a softer version that allows the use of first-order methods. Strengths: In what follows, a list of the strengths of the presented paper is given: * The paper draws back from the hard cardinality constraint and follows a relaxed version of it by resolving to the use of probability simplex and the top-K smoothed regularization to mimic the effect of the lost cardinality constraint. * The method is easy to follow, as well as it relies on the use of a proxy model. * While the proposed model is among the top 5 slowest from the list of competitors (referring to Table 8 in the appendix), it is by far the best method in terms of accuracy and forgetting almost across the board. * The new formulation of the problem allows for dropping the use of combinatorial optimization techniques while exposing the field to first-order optimizers. * Finally, the paper is well-written and easy to follow. Weaknesses: One problem that I have in mind is the use of the proxy model, which might be a limitation in the presence of large models, i.e., models containing $> 100M$ parameters. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Is it possible to circumvent the need for a proxy model in the case of working with large models? How worse would the results be if one would use some quantized or pruned version of the original model $M_{tr}$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors listed the limitations associated with their theoretical analysis in the form of assumptions usually used in the field of coresets for continual learning. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer FPqc: Thanks for your reviews. We have addressed your concerns below. **Q1: The proxy model $M_{cs}$ might be a limitation in the presence of large models, i.e., models containing >100M parameters. Is it possible to circumvent the need for a proxy model in the case of working with large models? How worse would the results be if one would use some quantized or pruned version of the original model $M_{tr}$?** **A1**: It is feasible to use only one model for the training and coreset selection phases. After each training phase, the model parameters can be saved in the disk temporarily and then the model continues to conduct coreset selection. In the next model training phase, the saved model parameters are loaded into the current model again. This method can handle large models but with an extra overload of loading and checkpointing. In addition, we apply the quantization technique [1] to the models $M_{cs}$ and $M_{tr}$ with lower bits. Specifically, we keep the full precision of the model $M_{tr}$ during the learning of each task, but the model $M_{tr}$ is quantized upon finishing one task. In particular, the gradients are calculated on quantized model parameters (INT16, INT8 and INT4) to perform gradient-based updates. After several steps of updates upon finishing the training of one task, the full precision model is quantized and saved. Then the quantized model is loaded to $M_{cs}$ for coreset selection. The above process repeats until finishing all the tasks. Finally, the quantized model $M_{tr}$ is used for performance evaluation. The results (ACC, FGT, quantized model size) are shown in Table 1. Specifically, models are quantized to INT16, INT8 and INT4 respectively, but with a small performance degradation (e.g., reduced by up to $4.1$% for INT4). However, the model size shrinks from $3.76$MB to $1.67$MB (INT16), $0.93$MB (INT8) and $0.61$MB (INT4), respectively, which reduces the memory footprint significantly (e.g., up to $83.8$% decrease for INT4). The results demonstrate that the quantized model achieves comparable performance to the original full precision model for the coreset selection but with a much smaller model size. That provides some useful insights into a quantized version of a large model applied in continual learning. **Table 1. The performance of the quantized model of ResNet18 on Split CIFAR-100 (FP32 means the original model with full precision).** | Param Type | ACC | FGT | Model Size (MB)| |------------------|------------------|--------------------|-----------------------| | FP32 | 61.60±0.14 | 0.051±0.015 | 3.76 | | INT16 | 60.03±0.16 | 0.064±0.017 | 1.67 | | INT8 | 59.12±0.21 | 0.081±0.024 | 0.93 | | INT4 | 59.06±0.15 | 0.085±0.027 | 0.61 | [1] Polino A, Pascanu R, Alistarh D. Model compression via distillation and quantization[J]. arXiv preprint arXiv:1802.05668, 2018. --- Rebuttal Comment 1.1: Comment: Thanks for clarifying my concern.
Rebuttal 1: Rebuttal: **General Response:** We would like to thank all reviewers for their constructive comments. We have answered the corresponding questions from reviewers and provided new experiment results requested by reviewers. The main summary of our responses includes: 1. Per Reviewer FPqc suggestion: - We have added the quantized experiments for models $M_{tr}$ and $M_{cs}$, and compared the performance (ACC and FGT) between quantized models and original models in continual learning. The descriptions of experimental details are included in our response. We show that our quantized models do not lose too much performance but require much smaller memory. 2. Per Reviewer AXy9 suggestion: - We have cited and discussed Hessian-free methods, which can be used to improve the efficiency of our method. - We have conducted ablation experiments for hyperparameters $N$, $Q$ for sensitivity analysis, and stated the details for $\delta$ setting. - We have explained the main challenges of bilevel optimization analysis in continual learning: 1) bilevel optimization on the constrained case; 2) smoothness of the problem. 3. Per Reviewer vGFs suggestion: - We have clarified the main differences and advantages between our method and Zhou et al. [ref1]. [ref1] Zhou, X., Pi, R., Zhang, W., Lin, Y., Chen, Z., & Zhang, T. (2022, June). Probabilistic bilevel coreset selection. In International Conference on Machine Learning (pp. 27287-27302). PMLR. 4. Per Reviewer b44u suggestion: - We have further analyzed the effects of Top-K regularizer and conducted ablation experiments for regular coefficient $\lambda$. - We have added experiments for comparisons with state-of-the-art non-coreset replay methods [ref2] and [ref3]: the results are included in new tables in the rebuttal. - We will add a description under the figure in the final version to draw the connection from texts to equation 1 and pytorch-style pseudocode in Algorithm 1. [ref2] Sun et al, Exploring Example Influence in Continual Learning, NeurIPS 2022 [ref3] Kim et al, Continual Learning on Noisy Data Streams via Self-Purified Replay, ICCV 2021
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Efficient Online Clustering with Moving Costs
Accept (spotlight)
Summary: The paper considers the $k$-median clustering problem in the online learning framework. More specifically, at the beginning of each round, the algorithm needs to maintain a set of $k$ centers (or facilities). After this, the actual set of clients is revealed. The algorithm incurs a cost equal to the total assignment cost of each client to the nearest center. At the next round, the algorithm can change the location of the $k$ centers, but then it also has to pay for the movement cost of the centers (by the min-cost matching between the two sets of locations). The total cost of the algorithm is the sum (over all rounds) of the assignment cost and the movement cost. The goal is to minimize the regret. The algorithm is said to have $(\alpha, \beta)$ regret, if the total cost is at most $\alpha$ times the cost of the offline static solution plus an additive term of the form $O(\beta \sqrt{T})$ (for $T$ rounds). Online learning with movement cost has been well-studied, and an influential result [11] gives an $(1+\epsilon, \beta)$ regret for any constant $\epsilon > 0$ and $\beta$ depends on size of the metric space. This result uses connections with the metrical task system problem. The running time of this algorithm (in each round) is of the form $O(n^k)$, and so is exponential in $k$. The paper asks if one can give efficient algorithms in this setting. The main result of the paper shows that for HSTs (hierarchically separated trees), one can get constant-regret efficient algorithms. Then, using standard ideas, one can get an $O(\log n, \beta)$-regret (where $\beta$ depends on the metric space only). For HST's, the algorithm is obtained in a two phase manner (again, this is standard in the case of online algorithms): frst maintain a fractional solution to this problem, and then maintain a randomized integral solution online (whose marginals are given by the fractional solution). For maintaining the fractional solution, they use the well-known FTRL approach, though the regularizer is non-standard (but was known in prior-work) and requires lots of details. The second part also requires non-trivial analysis, though again some of the ideas seem to be related to [9]. Strengths: 1. The online learning framework for clustering is an interesting problem, and this is the first non-trivial result for this problem. 2. The analysis, though along standard lines, requires lot of details and so is non-trivial. Weaknesses: 1. As mentioned earlier, most of the ideas follow along standard lines. 2. The actual regret guarantee is somewhat weak: one would expect constant-regret for general metric spaces. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can we get better results for tree metrics? As stated the reliance on HST's seems crucial. 2. What are the new ideas in fractional to integral conversion? At a very high level, it seems very related to the result in [9], though the authors do point out that [9] does not consider assignment costs, and only movement costs. Still, it was not clear, if any new ideas are needed here. 3. The running time is mentioned to polynomial in $n$, but can it be polynomial in $T$ also? For example, is it conceivable that when $T$ gets very large, the running time can also grow with it? I was a bit confused because if we are looking at the aggreagate time after $T$ rounds, then oen can potentially spend time polynomial in $T$. Are we assuming that $T$ is polynomially bounded? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: No negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer of their work and the insightful feedback. In the revised version of our work we will incorporate the following discussion addressing your comments. *Weaknesses* 1. *As mentioned earlier, most of the ideas follow along standard lines.*\ The cornerstone of our approach was to realize and establish that $O(1)$ regret is plausible for HST metrics. The main technical contribution of this work is (i) coming up with the dilated entropic regularizer to capture the structure of the HST and combine it with the FTRL algorithm and (ii) designing a lossless (up to constant factors) rounding scheme that simultaneously works for both the connection and the moving cost. Both of these components where central towards acquiring constant regret on HSTs. 2. *The actual regret guarantee is somewhat weak: one would expect constant-regret for general metric spaces.*\ While achieving constant regret for general metric spaces could be theoretically possible, it is unclear whether it can be actually achieved via polynomial-time algorithms. We remark that constructing lower bounds for time-efficient online learning algorithms is inherently difficult, as lower bounds on online learning algorithms are solely based on the lack of knowledge of the future data. Furthermore, achieving constant regret on HSTs (and thus $\mathcal{O}(\log n)$-regret on general metric spaces) already required several new ideas, as discussed above. Finally, we comment that constant-regret guarantees are not known even in the simpler setting of $\gamma=0$ (no moving-costs) considered by Fotakis et. al. [30]. *Questions* 1. Our algorithm is based on the HST structure (tree $+$ exponential decay in the weights). Going beyond our $\mathcal{O}(\log n)$ regret guarantees for general tree metrics is a very interesting direction that however requires new ideas and techniques. $$ $$ 2. The key idea for bounding the connection cost lies in Lemma 5. More precisely, *Cut&Round* produces a probability distribution over facility placements such that for any node $u$ in the HST, * $Y_v = \lfloor y_v \rfloor$ with probability $1 - \delta(y_v) $ * $Y_v = \lfloor y_v \rfloor$ + 1 with probability $\delta(y_v) $\ with $Y_v$ being the number of facilities in the descendant leaves of $u$. $$ $$ In Appendix E3 (at which the respective bound on the connection cost is presented) one can clearly see the importance of the above property - a probability distribution *marginally-respecting* the fractional solution cannot guarantee bounded connection cost. \ $$ $$ Beyond the connection cost, a crucial merit of *Cut&Round* over the rounding of [9] concerns the computational complexity. Specifically, it is not known whether the rounding of [9] can be implemented in polynomial time. The rounding of [9] maintains a prob. distr. $\mu_t$ (over facility placements) that marginally respects the underlying fractional placement $x_t$. At round $t+1$, $\mu_{t+1}$ is selected such that $$Earth-mover-distance(\mu_t,\mu_{t+1}) \leq \mathcal{O}(d(x_t,x_{t+1})).$$ \ In order to guarantee the latter, the rounding of [9] iterates over the facility placements on the positive support of $\mu_t$ that can be exponentially large.\ \ On the other hand, *Cut&Round* runs in linear time wrt to the size of HST. Notice that *Cut&Round* guarantees bounded moving cost through the use of *shared randomness* across all rounds (see $\alpha_v$ in Algorithm $1$). $$ $$ 3. Notice that since new clients arrive at each round $t \in [T]$, a linear dependence in $T$ is inevitable. Thus $T$ is essentially considered to be polynomially bounded. Our algorithm admits linear dependence $T$ and polynomial in the rest of the parameters. In the revised version of our work we will add a formal definition of polynomial-time online learning algorithms so as to avoid confusion; specifically, we are interested in algorithms whose time complexity is polynomial to the size of the input and the horizon $T$.
Summary: This paper studies online clustering with moving costs problem. In the problem, the client sets change over time. In each round t, the algorithm has to place k facilities F_t before seeing the set R_t of clients. Two costs will be involved in each round t: the connection costs and the moving costs. The connection cost is the k-median cost of the solution F_t, with R_t being the client set. The moving cost is gamma times the earth-mover distance between F_{t-1} and F_t. The cost is compared to the cost of the best static solution. The main result of the paper is an algorithm that achieves O(log n)-regret and beta \sqrt{T} additive error, where beta = O(kn^{3/2} * D_G * max{gamma, 1}. To achieve this goal, the authors reduced the general metric to a HST metric, by losing an O(log n) factor in the regret. For the HST metric, the authors first solved a fractional version of the problem, and then use an oblivious randomized rounding algorithm to round the fractional solutions to integral ones. The rounding algorithm guarantees that in expectation, the moving cost incurred by the integral solutions is at most that of the fractional solutions. To solve the fractional version of the problem, the authors used a dilated entropic regularizer. For every non-leaf u in the tree, they consider the fractional centers in each child-tree of u. After scaling this gives a distribution. They consider the entropy of this distribution. The dilated entropic regularizer is the weighted sum of the entropies over all non-leaves u, where the weight for a vertex u is 2^{lev(u)}; that is, the distance from u to any of its descendent leaf. With this regularizer defined, the algorithm in every round chooses the fractional solution that minimizes the retrospective connection cost plus the scaled regularizer. The authors then show that the algorithm can achieve a O(k n^{3/2} D \gamma) * \sqrt{T} additive regret. Strengths: -This is the first result that considers moving cost in the online learning model. The model introduced is interesting, and the paper may initiate the study of the model for other problems. Previous results only consider the connection cost. -The paper achieves an O(log n)-multiplicative regret guarantee, improving the previous O(k) factor. Weaknesses: The parameter beta is too big. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: No questions. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: No Limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer of its work and the insightful feedback. In the revised version of our work we will incorporate the following discussion addressing your comments. *The parameter beta is too big:* (Similar to Reviewer wiCY) The additive regret term $\beta = O(k * n^{3/2} * D * \gamma * \sqrt{T})$ is typically only required to vanish for large values of $T$, and this is indeed a standard convention that is considered in the online learning literature. We also comment that in the previous work of Fotakis et al. [30], their algorithm for the $\gamma=0$ setting admits an additive regret of $\mathcal{O}(k * D * n * \sqrt{\log nT})$ that matches our additive regret up to a $\sqrt{n}$ factor.
Summary: This paper considers a regret framework for the following online learning problem. As input we are given a weighted graph and a number k of facilities. At each time step, we must select k vertices of the graph to serve as facilities. Afterwards, we learn the clients (also vertices of the graph) that the facilities must serve. The cost of the time step is the connection cost of the clients to the facilities, plus the cost of moving the facilities from the old positions to the new positions. The goal is for the total cost over all rounds to be small compared to the best solution in hindsight, that is, a static set of k facilities that minimizes the connection costs over all rounds. While the multiplicative weights algorithm can be shoehorned to give a (1+\eps)-regret algorithm, it runs in time O(n^k), hence not polynomial. An O(k)-regret polytime algorithm is known for when there are no moving costs. This paper presents the first polytime algorithm with regret guarantees that accounts for moving costs; it achieves O(log n)-regret (suppressing additive terms). A key tool in the paper is to first develop an O(1)-regret algorithm for the special case where the input graph is a type of weighted tree called a Hierarchical Separation Tree (HST). Then for the general case, an O(log n)-distortion metric embedding is used to reduce the general graph case to the tree case. To achieve an O(1)-regret algorithm for HSTs, there are two phases. The first phase is to obtain an O(1)-regret algorithm for a fractional relaxation of the problem, in which facilities can be placed fractionally on vertices. The structure of the HST is crucial in both the formulation of the fractional relaxation and in the algorithm used to solve it (which is the well-known Follow the Regularized Leader algorithm, with the choice of regularizer exploiting the structure of the HST). The second phase is to round the fractional solutions to integral solutions, through a novel rounding algorithm the authors call Cut and Round, which preserves the connection and moving costs of the fractional solutions up to constant factors. Here as well, the structure of the HST is crucial. Strengths: - This paper gives the first polytime algorithm with regret guarantees for k-clustering with moving costs. Incorporating moving costs gives rise to a more expressive and realistic model than those considered previously. - This paper solves the problem (up to constant factors) for a certain class of trees, which is interesting in its own right. - While many of the techniques used (follow the regularized leader, fractional relaxations, and randomized rounding) are somewhat standard, reducing the problem to trees is a key insight that enables the use of these techniques. - The paper is clear and easy to follow. Weaknesses: - Without lower bounds, it is difficult to interpret the regret factor. - A more detailed comparison to the ideas used in the work of Fotakis et. al. would help contextualize the present work and highlight which ideas are novel. For example, the work of Fotakis et. al. also uses a fractional relaxation. What are the obstacles to extending their work, and in what ways is the present approach similar to theirs? - While the authors emphasize that the tree structure is exploited in many parts of the algorithm, they do not provide any intuition or motivation for why this case is in some sense easier. - The paper appears to violate the formatting instructions in terms of font size. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - How crucial is the matching-based formulation of moving costs to the proof? Have you considered other natural ways of defining moving costs and if so, what obstacles arise? - Can the results be easily adapted to when the connection cost is based on, say, \ell_p objectives? - Does the problem become easier for \gamma = 1? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Limitations are acknowledged. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer of all its work and the insightful feedback on our paper. In the revised version of our paper we will incorporate the following discussion, addressing the reviewer's comments. 1. *Without lower bounds...the regret factor:* Our work indeed does not provide lower bounds, however we remark that this is an inherent difficulty once considering time-efficient online learning algorithms. Lower bounds on online learning algorithms are solely based on the lack of knowledge of the future data. Deriving improved lower bounds on the regret in case of time-efficient algorithms is a very important and intriguing research direction however we are not aware of any technique in the literature. $$ $$ 2. *A more detailed...approach similar to theirs?:* $$ $$ Despite both our work and Fotakis et al. use the *relax-and-round* pipeline, there are key differences in the two approaches. Namely, the algorithm of Fotakis et al. uses the *original* metric space while our algorithm uses the HST metric space approximating the original metric space.\ $$ $$ The rounding of Fotakis et al. is based on the so-called filtering technique [1]. More precisely, in order to bound the connection by its fractional counterpart, Fotakis et al. produce a facility-placement such that *the connection cost of any node on the metric (with or without client) is at most $\mathcal{O}(k)$ times its fractional counterpart*. However, in order to guarantee the latter property, their produced integral solution can be arbitrarily far away from the respective fractional one. This is the reason why their approach is a dead-end once accounting moving cost - two consecutive fractional solutions can be very far away from the respective fractional ones.\ $$ $$ On the other hand, our algorithm exploits the easier HST structure to simultaneously treat both connection and moving cost. Our rounding scheme (*Cut&Round*) produces a probability distribution over facility placements, the marginal of which coincide with the fractional facility placements (in fact it guarantees a even stronger property, see Lemma 5 and our response to Reviewer d1Rn). By exploiting the HST structure we establish that the expected cost of each client equals its factional connection cost. At the same time *Cut&Round* creates correlations among the probability distributions of different rounds through the *shared randomness* $\alpha_v$ (see Step $2$ of Algorithm $1$). Again, due to the HST structure we able to approximately upper bound the expected moving cost by the overall fractional moving cost. $$ $$ [1] Approximation algorithms for geometric median problems, Lin et al. 1992 $$ $$ 3. *While the authors emphasize...sense easier:* $$ $$ The key reason that HST structure is important is the following: Consider the prob. distribution $x,y$ over the nodes of metric space $G(V,E,w)$. In order to compute the distance of the *optimal transport* from $x $ to $y$ one would need to solve a min-cost flow problem. However in case $G(V,E,w)$ is an HST, then the optimal distance equals $\sum_{v \in V} 2^{\mathrm{Level}(v)} |x_v - y_v|$. The latter closed-form formula provides with *easy to handle description* of both the connection and the moving cost - we refer the reviewer to Definition 7 and 8 for the respective formulas of the connection and moving cost. $$ $$ 4. *The paper...terms of font size:* We have used the neurips template. We have also checked with others submissions of ours and we did not find any discrepancy. Could you provide us with further details so that we can correct the formatting for the camera ready version? $$ $$ 5. *How crucial...obstacles arise?:* By considering different metric spaces, matching-based formulations capture all natural ways of defining the moving cost . For example the tweak on the problem at which the moving cost is defined as the set-difference of the facilities placements, $(|F_t/F_{t-1}|+|F_{t-1}/F_{t}|)/2$, can be captured with a matching-based formulation on the \textit{uniform metric}. In this work we have focus on the most \textit{intuitive formulation} at which the connection cost of the clients and the connection cost lie on the same metric space. The case of two different metric spaces (one for clients and one for facilities) is a very interesting research direction that however remains outside the scope of this work. $$ $$ 6. *Can the results be...$\ell_p$ objectives?:* Extending our results to general $\ell_p$ metrics is a very interesting direction. We believe that our ideas can be extended to cover general $\ell_p$ metrics however this question remained outside the scope of this work. $$ $$ 7. *Does the problem become easier for $\gamma = 1$?:* Apart from the obvious fact that $\gamma \geq 1$ captures the case $\gamma = 1$, we do not think that the problem becomes easier in this specific case. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. As a result, I have decided to raise my score. A few comments: > Despite both our work and Fotakis et al. use the relax-and-round pipeline, there are key differences in the two approaches... I would suggest adding a few lines about the differences you mention here to the paper. > We have used the neurips template... Without seeing the LaTeX code, I do not know, but I would just check that you did not change the font size or the line spacing. I only mention this because the submission looks different in terms of font from other submissions I reviewed. --- Reply to Comment 1.1.1: Comment: Thank you very much for your response and for increasing your score. In the revised version of our paper we will incorporate the discussion comparing the techniques of our work with those of Fotakis et al. Concerning the formatting, we have not changed the font size or the line spacing. Thank you again for all your work!
Summary: The authors study the k-median problem in an online learning setting with incorporated moving costs. In this setting the instance is revealed in batches, and before each batch the algorithm is required to place k centers so that to minimize the assignment costs of the (unseen) points that are then revealed in the batch. The objective function incorporates a cost for each center-placement that is moved between consecutive batches. The cost of the algorithm is the sum of the cost incurred in all batches. The performance of the algorithm is then measured w.r.t. to the best possible *fixed* placement of k-center, and is called the regret of the algorithm. The authors present an algorithm that has regret of a O(log n) multiplicative factor, and a O(Poly(k, n, Diameter, \gamma)*\sqrt(T)) additive term, where n is the number of all possible points that are revealed, and T the number of batches. The Poly(k, n, Diameter, \gamma) term in the additive factor becomes insignificant when T grows to infinity. In their experiments, the authors compared their algorithm to the previous state-of-the-art algorithm by [Fotakis et al.], which does not consider the moving cost in the objective function. The authors report that their proposed algorithm outperformed [Fotakis et al.] on instances that were adversarial for the competing algorithm. Then, the authors also applied their algorithm to a couple of instances of images, where the images arrived in a random order. They compared the results to the optimum fractional solution on the whole instance. In this second set of experiments, the proposed algorithm performed very close to the optimum. However, the random order arrival seemed to be the easiest arrival order for the algorithm. This is because the distribution of the images did not shift over the batches of updates Strengths: + Studies a well-motivated formulation that incorporates the moving costs of the center. This models better the discussed applications, overcoming the shortcomings of previous studies. + Solid theoretical contributions, and well-structured presentation. Weaknesses: - The additive term O(k * n^{3/2} * Diameter * \gamma * \sqrt{T}) stops being the dominant term (and in fact being less than any arbitrary fixed solution) when T >> n, which may be unrealistic in some cases (including the motivating applications provided in the introduction). However, this may be an inherent artifact of the online batch-learning model. - No experiment made it to the main body of the paper (although briefly discussed). - The experiments consider batches of size 1, and only compare Fotakis et al. on instances adversarial to the competing algorithm. In the second experiment on real instances, the random arrival order is very friendly to the online learning framework. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I find the problem formulation well motivated and natural. The theoretical result and the techniques are quite interesting, and deserve publication. The experimental evaluation is very friendly to the particular setting, and could be further strengthened. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your work and valuable comments. We commit on addressing them in the revised version of our work. 1. *The additive...model:* As mentioned correctly by the reviewer, the additive term typically vanishes for large values of $T$, and this is indeed a standard convention that is considered in the online learning model. We also comment that in the previous work of Fotakis et al. [30] where the version of the problem without moving costs ($\gamma=0$) is introduced, their algorithm admits an additive regret of $\mathcal{O}(k * D * n * \sqrt{\log nT})$ that matches our additive regret up to a $\sqrt{n}$ factor. $$ $$ 2. *No experiment..(although briefly discussed).* Due to space limitations, we had to move the experimental part of our work to the appendix. We plan to use the extra page that is provided in case of acceptance in order to include the experimental evaluation of our algorithm to the main body of our work. $$ $$ 3. *The experiments...learning framework.* $$ $$ In our experimental evaluations we compare against the algorithm of Fotakis et al. since to the best of our knowledge it is the only method for online clustering running in polynomial time. $$ $$ In the attached pdf we provide additional experimental evaluations of our method for sequences at which the clients do not arrive in random-order while admitting batches of size $10$. As our results indicate, neither the batch size nor the random-arrival order seem to affect the algorithm's performance, and we still observe near optimal performance, showing that the $\mathcal{O}(\log n)$-regret guarantee is really pessimistic in real-world data. We will incorporate these additional experimental evaluations in the revised version of the paper. $$ $$ We would also like to comment that one of the reasons that designing adversarial instances for our algorithm is hard, is the inherent difficulty of designing lower bounds once considering time-efficient online learning algorithms. Deriving improved lower bounds on the regret in case of time-efficient algorithms is a very important and intriguing research direction, however we are not aware of any technique in the literature, and it is for this reason that we chose to consider the random arrival order. $$ $$ Finally, we are more than willing to run additional experiments during the discussion period and incorporate them into the camera-ready version of our work. We welcome any suggestion on settings and/or datasets that would be interesting in evaluating our algorithm. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I appreciate the additional experiment with batches of larger size and with non-random arrivals, and understand the difficulty of creating adversarial instances. I'm still a bit uncomfortable with the large additive factor: While I acknowledge that the previous work also exhibits an additive factor, there is no clear evidence that this is necessary (e.g., in the differential private algorithms, it is known to be required). --- Reply to Comment 1.1.1: Comment: Thank you very much for your response and for appreciating the additional experiments we provided! We would like to note that applying the MWU/Hedge algorithm (that requires exponential time and space) even to the easier version of our setting without moving cost ($\gamma = 0$) leads to $O(\mathcal{O}(\sqrt{\log\binom{n}{k} \cdot T}))$ additive regret, that is comparable to ours up to polynomial factors. Specifically, by treating each of the $\binom{n}{k}$ different center configurations as a separate expert, we can view the $\gamma=0$ version of our problem as an instantiation of the “Learning from Experts Advice” problem and directly apply the MWU algorithm. Doing so would give us a $1$-multiplicative regret guarantee with an additive regret of $\mathcal{O}(\sqrt{\log N\cdot T})$. The latter indicates that some additive regret with a polynomial dependency on $n,D$ and $k$ and sub-linear to $T$, might be unavoidable in our setting. Thank you again for all your work!
Rebuttal 1: Rebuttal: We thank all the reviewers for all their work and insightful comments. In the attached pdf we present a set of supplementary experiments, addressing some of the reviewers' comments. Pdf: /pdf/7a07db3c29092ae2324732852a9f2e77af7f1205.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Robust Second-Order Nonconvex Optimization and Its Application to Low Rank Matrix Sensing
Accept (poster)
Summary: The authors focus on the problem to find approximate second-order stationary points (SOSPs) in the strong contamination model, where they propose an efficient algorithm with an approximate SOSP as an output. The algorithm is proved to have dimension-independent accuracy guarantees. In particular, the proposed algorithm can solve the low rank matrix sensing problem robustly. They first introduce the formulation of generic nonconvex optimization problem and the low rank matrix sensing and the assumptions for the main theorem where the outputs of the algorithm for the corrupted Stochastic optimization problem can achieve approximate SOSP with high probability. For general cases, the algorithm can obtain approximate SOSPs under strong contamination. At last the authors provide the theoretical guarantees for robust low rank matrix sensing problem. Strengths: The paper is well written and clear, the approach is well supported by the theoretical analysis. The application on robust low rank matrix sensing problem seems strong and nice. Weaknesses: - It might be good to include some simulations or real data applications on the robust low rank matrix sensing problem to address its computational efficiency. Or maybe some comparisons to first order method such as projected gradient descent for robust stochastic optimization problem will be better. - The authors present theorem showing the results for general robust nonconvex optimization, like Theorem 1.5 and Theorem 3.1. As the authors say below Theorem 1.5, the assumptions appear restrictive and are satisfied for the matrix sensing. My question is whether the general results can hold for other nonconvex optimization problems such as matrix completion, phase retrieval, etc. Or what assumptions might be violated in these cases? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weakness. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I have not found the part explicit addressing the limitations of the algorithm. Maybe authors can elaborate more on these. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: About Weakness 1: We addressed simulations and experiments in the global response. To the best of our knowledge, the only first-order method that can robustly find approximate second-order stationary points is [Yin+19]; note that projected gradient descent can only find approximate first-order stationary points, which are not sufficient for a number of nonconvex optimization problems (see Lines 36–57 in Section 1 for the detailed reason). However, the error guarantee in [Yin+19] scales with the dimension, which is very unfavorable and usually uninformative for high-dimensional problems. About Weakness 2: We addressed why the assumption that the iterates stay in a bounded region $\mathcal{B}$ is fairly general in the global response. For example, none of the mentioned nonconvex optimization problems violate those assumptions of Theorem 1.5. Regarding extensions to other nonconvex problems: - For matrix completion, see the global response as to why it may not be interesting in the strong contamination model. - For phase retrieval, our techniques should apply and this is an exciting avenue for future work. About Limitations: As a theory submission to NeurIPS, we stated all assumptions in a precise way. Our algorithms will succeed under those assumptions. As the “Limitations” section is not compulsory in NeurIPS 2023, we did not go into depth to discuss them because all our assumptions are fairly standard. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their reply. My assessment remains inclined to the positive.
Summary: This paper considers the problem of finding a second-order stationary point in corrupted settings. In particular, it considers the adversarial settings, where a fraction of the observations are arbitrarily corrupted after they are observed. In this setting, under certain assumptions on the cost functions, the authors give an algorithm that yields a second-order stationary point with $n=O(D^2/\epsilon)$ samples. This result is applied to problem of robust matrix sensing. Strengths: - This work yields the strongest guarantees of finding a second-order stationary point in this setting. - This is the first work to use both robust mean and robust Hessian approximation to solve the second-order approximation problem. - The work provides a statistical query lower bounds for rank one matrix sensing that shows that exponentially many queries would be needed to go beyond the $O(d^2)$ bound. Weaknesses: - One of the assumptions - that the iterates of the algorithms stay in a bounded region $\mathcal B$, is hard to check or guarantee in general. - The work is a combination of two past works: the robust mean estimation algorithm of DKP20 and an anonymous work Aut23. It is not clear what new theoretical techniques are employed, or if this work is just a combination of the previous. - The statistical query result also follows the theorems in past works. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - I wonder when one would want to apply such a method over first-order methods. In particular, this method requires computing estimates of the Hessian, which is quite expensive in high-dimensional settings. Can the authors motivate a bit more why the second-order stationarity is important, for example, in the application they discuss (low-rank matrix sensing)? Experiments on some real data settings could show why one might be interested in second order stationarity. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: - While an interesting and complete work, I am left unsure of what is new in this work, or if this is just applying already developed techniques to a new setting (finding a second-order stationary point) -- i.e., just combining the results of DKP20 and Aut23. - No experiments are given showing the practicality of the method, which could help to motivate the usefulness of finding SOSPs. - This is a limitation of the setting, but many more samples are needed than the information-theoretic threshold. - It is not clear why it is useful to find second-order stationary points in general. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: For Weaknesses, we responded to the restrictiveness of the bounded region assumption in the global response. We provided details of our novel theoretical techniques in Section 1.2, starting Line 152. Our main conceptual contribution is to propose a unified framework for designing provably robust learning algorithms by robustly finding second-order stationary points (SOSPs), which is important because for many nonconvex problems, first-order stationarity is not sufficient but second-order stationarity implies global optimality. We then showcase our framework by applying it to solve robust matrix sensing in the strong contamination model with dimension-independent error, even achieving exact recovery when the measurements are noiseless. On a technical level, showing that all iterates stay inside the region using the dissipativity condition is a useful demonstration and technically challenging. On the lower bound side, although the generic techniques for our SQ lower bound construction exist in the literature, it is highly non-trivial to establish the moment-matching construction required to prove our lower bound. Moreover, it is by no means clear that such a lower bound should hold — there exist efficient algorithms that can estimate the largest eigenvalue (and an associated eigenvector) of a real symmetric positive semidefinite matrix with a sample size proportional to the dimension $d$, and one might think estimating the smallest eigenvector of a symmetric matrix is equally simple. Our SQ lower bound rules out this possibility, because otherwise both gradient directions and negative curvature directions can be estimated with $O(d)$ samples, contradicting the lower bound. For Questions, Lines 36–57 in Section 1 discussed why we need approximate second-order stationary points. In summary, for a number of nonconvex problems, second-order stationarity implies global optimality, while first-order stationarity is known to be lacking [CG18]. In particular, this statement holds for low-rank matrix sensing problems. For Limitations about the large sample size, while there may be a gap between the sample size required by our algorithm and the information-theoretic lower bound of the outlier robust low-rank matrix sensing problem, we provided an SQ lower bound that justifies such a gap and shows that our dependence on the matrix dimension $d$ is tight. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response to my questions. I think the lack of practical implementation is still a major weakness, but I am inclined towards a more positive score and will raise it to 6.
Summary: In this work, the authors proposed a new algorithm to find approximate second-order stationary points for stochastic optimization problems under the strong contamination model. The general algorithm is applied to the robust matrix sensing problem and the convergence results are proved for the robust matrix sensing problem. Strengths: The results in this work are novel and should be interesting to audiences in optimization and machine learning fields. Weaknesses: It is unclear how the results of this work differ with those in literature; see my comment (10) in the next section. The presentation of the results and the sketch of proofs can be improved. Currently, many important technical details are omitted in the main manuscript. For example, Algorithms A.1-2 and the construction of distributions for the SQ lower bound. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: (1) Line 48: "newdiscussed" is a typo. (2) Line 69: it would be better to (briefly) discuss the counterexample in the appendix. (3) Line 85: it seems that both "second-order stationary point" and the abbreviation "SOSP" are used throughout the paper. It would be better to be consistent in using the abbreviation. (4) Line 93: I think B_D_g and B_D_H should be the bound on the norm of the gradient and the Hessian matrix, respectively? (5) Theorem 1.5: maybe the authors can briefly discuss the reason why the sample complexity is inversely proportional to the corruption rate \epsilon? (6) Line 148: it may be better to use D to denote the dimension. (7) Line 157: I wonder if the region is the same as the region B in Assumption 1.4? In addition, is the information about region B provided to the algorithm as an input parameter? It will be helpful if the authors can clarify it in the paper. (8) It seems that the statements on Lines 163 and 166 are the same. (9) Line 169: it would be better if the authors can be more specific on the circular dependence. Does it refer to the rotational invariance of the objective function? (10) Line 176: I think the results in the following paper also concerns the noiseless and Gaussian measurement case. It will be better if the results in the following paper can be discussed and compared. Li, Xiao, Zhihui Zhu, Anthony Man-Cho So, and Rene Vidal. "Nonconvex robust low-rank matrix recovery." SIAM Journal on Optimization 30, no. 1 (2020): 660-686. (11) In Equation (1), the computational complexity is proportional to \epsilon_g^2. This is a little counter-intuitive, since the upper bound of the running time is not changed is we shrink \epsilon_g and \epsilon_H together (at different rates). It would be better if the authors can include an explanation to this relation. (12) Line 231: "exists" (13) Theorem 3.5: It would be better to mention the condition \epsilon = O(1/k^2/r^2). (14) Line 320: "Theorem 3.1 applies" (15) Line 367: it would be better to explain why the results are more significant or important for the rank-1 case. (16) Line 378: it will be helpful if the authors can briefly explain how to generate the SQ oracle. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: See my comments in the previous section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for pointing out the typos in Questions (1)(6)(12)(14) and suggestions for our presentation Questions (3)(13). Question (15) was addressed in the global response. As for the weakness about our presentation, we provided a sketch of the proof in the main body and deferred algorithms introduced in other papers and technical details to the appendix due to page limitations. About Question (10) and the main weakness about our novelty: Thank you for pointing out this improvement over [Li+20] and we will add it to the references. This paper, as well as the cited [Li+20], discussed the setting where outliers only exist in the measurements $y_i$. Our paper, in contrast, considers the more challenging scenario where both sensing matrices $A_i$ and measurements $y_i$ can be corrupted. Since the sensing matrices are high-dimensional objects, allowing their corruptions results in a much more difficult problem: while both [Li+20] and [Li+20a] can solve their problem with $\widetilde O(d r)$ samples, our setup requires $\widetilde \Omega (d^2)$ samples as evidenced by our SQ lower bound. [Li+20a] Xiao Li, Zhihui Zhu, Anthony Man-Cho So, and Rene Vidal. "Nonconvex robust low-rank matrix recovery." SIAM Journal on Optimization 30, no. 1 (2020): 660-686. About Question (5): See the global response for the derivation of the sample complexity. Our algorithm uses robust mean estimation discussed in the global response as a subroutine, which is why we have the same dependence on $\epsilon$. About Question (7): Yes, the region in Theorem 1.5 is the same as in Assumption 1.4. Our algorithm does not explicitly check whether the iterates move outside the region. This is a fairly standard assumption in robust nonconvex optimization (see the global response where we argue this assumption is not restrictive). In the application to low-rank matrix sensing, we proved that this assumption is indeed satisfied. Note that information about region $\mathcal{B}$, though not explicitly required and checked, might be implicitly passed to the algorithm because it can be related to the Lipschitz constant of the gradient or Hessian of the objective function. About Questions (8) & (9): The circular dependency in Line 169 refers to the following dependencies of three quantities: the gradient inexactness estimate depends on the covariance bound of gradients (which depends on the covariance bound of the gradients, see Proposition 2.3), the distance between current solution and global optima (which depends on the gradient inexactness, see Equation (32)), and the covariance bound of the gradients (which depends on the distance to global optima, see Equation (17) when we computed the covariance of the gradients). We intended to use this paragraph to sketch and help readers understand the proof of Theorem 3.8 and its difficulties. The repetition of similar wordings in Lines 163-166 happens when we are trying to sketch how we overcame such circular dependence. About Question (2): There is a counterexample from our low-rank matrix sensing application where the adversary can trick the algorithm into believing that a saddle point with dimension-dependent negative curvature appears to be a local minimum solution. We will include this counterexample in the appendix. About Question (4): When we discuss the general robust nonconvex optimization result, we consider the gradient to be a vector and Hessian to be a matrix. When we apply this result to robust low-rank matrix sensing, we vectorize the matrix variable $U$ when we invoke Algorithm A.1 to fit into the general framework. About Question (11): Equation (1) consists of two terms. The first term is the expected computational complexity that scales with $\max(\epsilon_g^{-2}, \epsilon_H^{-3})$, which is standard for vanilla gradient descent algorithms with negative curvature detection. The second term is the correction when we extend the expected computational complexity to the high probability bound, which involves some more complicated interplay between the gradient step ($\epsilon_g$ dependence) and negative curvature steps ($\epsilon_H$ dependence). If we shrink $\epsilon_g$ and $\epsilon_H$ together, the first term will increase. About Question (16): A statistical query with accuracy \tau can be implemented with error probability $\delta$ by taking $O(\log(1/\delta)/\tau^2)$ samples and evaluating the empirical mean of the query function $q(\cdot)$ evaluated at those samples. It is usually possible to reuse samples across different queries. See the paragraph “SQ Algorithms versus Traditional Algorithms” in [DK23, Chapter 8.2.1] for a more detailed discussion. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their detailed response! I will remain netural and lean towards accept. So I will keep my score. --- Rebuttal 2: Title: Response to Authors' Rebuttal? Comment: Dear Reviewer emkc, Thanks for your hard work in reviewing this paper. As the author-reviewer discussion period will end on Monday, August 21, could you kindly take a look at the authors' responses to your comments and indicate whether you are satisfied with them? Your timely feedback not only will contribute greatly to the decision process but will also be greatly appreciated by the authors and program team. Best, Your AC
Summary: This paper studies the problem of finding approximate second-order stationary point when a constant fraction of datapoints are corrupted by outliers. It proposes an algorithm with provable guarantees which matches the statistical query lower bound established in the paper. The general result is applied to study low-rank matrix sensing with outliers. Strengths: This paper first proposes a general result in finding approximate SOSP and then applies it to study the widely applicable problem of low-rank matrix sensing. The newly proposed algorithm is proved to be able to find an approximate SOSP in polynomial time even when a constant proportion of samples are corrupted. It also provides a lower bound on the sample complexity for rank-one matrix sensing which matches the required sample size of the algorithm, confirming the efficacy of the algorithm. Weaknesses: 1. The results obtained in Theorem 1.7 and 3.3 suggest that increasing the sample size $n$ (while fixing the noise level $\sigma$ and outlier fraction $\epsilon$) does not enhance the algorithm's performance, as the estimation error does not depend on the sample size. This observation seems counterintuitive and calls for additional clarification in order to better understand this phenomenon. 2. In Section 4, the paper presents a lower bound for low-rank matrix sensing specifically for the case where the rank is one (i.e., $r=1$). The statement "Our main result in this section is a near-optimal SQ lower bound for robust low-rank matrix sensing that applies even for rank $r=1$" suggests that the rank-one scenario poses additional challenges than the general low-rank case, which doesn't sound reasonable. Further clarification is needed to better understand this. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. This paper considers symmetric case. What if the matrix $M^{\star}$ is asymmetric? 2. A closely related problem is matrix completion. Is it possible to extend the analysis to matrix completion? 3. It is assumed that the algorithm requires knowledge of a multiplicative upper bound $\Gamma$ of $\sigma_1^{\star}$. How can this parameter be estimated in practice? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This paper does not have potential negative social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Weaknesses 1 & 2 and Question 2 were addressed in the global response. For Question 1, we intended to present our robust low-rank symmetric matrix sensing algorithm as a simple application of our general robust nonconvex optimization framework. The techniques are generalizable to asymmetric ground truth matrix $M^*$: [GJZ17, Section 5] discussed how to reduce asymmetric $M^*$ to the symmetric case, i.e. when $M = U V^\top$ for some tall matrices $U$ and $V$, we have $[UU^\top, M; M^\top, VV^\top] = [U; V] [U; V]^\top$ (which is symmetric) and one can add an additional regularizer in the objective function so that $UU^\top \approx VV^\top$. We leave the details to future work. For Questions 3 & 4, knowing an upper bound of the norm of the ground-truth matrix is a standard assumption in matrix sensing, even for non-robust settings. See, e.g., [GJZ17] and [Jin+17]. Its estimation is an interesting question but beyond the scope of our paper. One potential technique is to double the upper bound until a solution is found; the estimation procedure can certify a solution by robustly estimating the value of the objective function and checking whether the error bound in Theorem 1.7 is satisfied. We will clarify this. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification! I will increase the score accordingly.
Rebuttal 1: Rebuttal: We thank the reviewers for their careful consideration of our work and the positive feedback. Below we address some common concerns raised by the reviewers. We hope that the provided clarifications will help clear possible misunderstandings and elevate the reviewer’s assessment of our contributions. **On the restrictiveness of the assumption that all iterates stay inside a bounded region $\mathcal{B}$ in Theorem 1.5**: As demonstrated in our low-rank matrix sensing example, this assumption is easy to satisfy if the objective function satisfies a dissipativity condition. The dissipativity condition says that the gradient aligns with the iterate when its norm exceeds some threshold; this threshold determines the radius of $\mathcal{B}$, which is allowed to depend polynomially on the dimension (see Line 97). Dissipativity is a fairly general phenomenon [Hal10]. See also [RRT17, Section 4] for a discussion of how adding a $\ell_2$-regularization term enables a Lipschitz function to satisfy the dissipativity condition. [RRT17] Maxim Raginsky, Alexander Rakhlin, and Matus Telgarsky. "Non-convex learning via stochastic gradient Langevin dynamics: a nonasymptotic analysis." In Conference on Learning Theory, pp. 1674-1703. PMLR, 2017. **On the choice of rank r = 1 in the construction of SQ lower bound**: The case where the rank $r = 1$ is in fact the easiest parameter regime, hence leading to a stronger lower bound. Recall that the sample complexity of our algorithm is $\widetilde O(d^2 r^2)$ as in Theorems 3.2 and 3.3, and the main point of our SQ lower bound shows that the $d^2$ factor is necessary *even if* $r = 1$. **On the sample size in Theorems 1.7 and 3.3**: The error rate under the strong contamination model (Definition 1.1) usually takes the form of $\widetilde O(f(\epsilon) + g(d/n))$, where $f$ and $g$ are some nondecreasing functions, $n$ is the number of samples, and $d$ is the dimension of the samples. Even with infinite samples, the contribution of the error rate from $\widetilde O(f(\epsilon))$ does not go to 0. For example, any robust estimator for the mean of a $d$-dimensional identity covariance Gaussian must incur $\ell_2$-error $\Omega{\epsilon}$ in the strong contamination model [DK23, Chapter 1.2]. We choose the sample size $n$ so that the contribution from $\widetilde O(g(d/n))$ is comparable to $\widetilde O(f(\epsilon)$, as is the standard practice in algorithmic robust statistics. For the robust mean estimation subroutine used in our paper, Proposition 1.5 in [DKP20] showed that $\epsilon$-corruption of $d$-dimensional samples from a distribution with covariance bounded by $\sigma^2 Identity$ gives error rate of $\widetilde O(\sqrt{d\sigma/n} + \sqrt{\sigma \epsilon})$. Matching the contribution from the first term with the second term gives the sample size $n = \widetilde O(d / \epsilon)$. **On the robust matrix completion problem**: The general robust nonconvex optimization framework and the techniques developed in this paper extend to the robust matrix completion problem (where both values and locations of a fraction of observed entries are corrupted). However, under our problem setup and with our current results, matrix completion where the target matrix has a total of $d^2$ entries requires a sample size bound of $O(d^2)$ (which is potentially optimal if our lower bound for matrix sensing extends to matrix completion). This does not seem to be a particularly interesting result, because this sample complexity allows the algorithm to inspect all entries multiple times. For some other contamination models (different from Definition 1.1), only $\widetilde O(d \operatorname{poly}(r))$ samples are needed (see, e.g., [CG18] and [CGJ17]). Those would be more reasonable sample sizes but their settings are drastically different from ours. [CGJ17] Yeshwanth Cherapanamjeri, Kartik Gupta, and Prateek Jain. "Nearly optimal robust matrix completion." In International Conference on Machine Learning, pp. 797-805. PMLR, 2017. **On lacking simulations and experiments**: Our work is a learning and optimization theory submission that is well within the scope of NeurIPS (based on the Call for Papers). The sample/computational efficiency and error guarantees of our methods are analyzed in a precise way. We believe that our technical results are theoretically interesting and request that our work be judged based on its merits. That said, we acknowledge that experimental evaluation is important and a fruitful direction for future work.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Setting the Trap: Capturing and Defeating Backdoors in Pretrained Language Models through Honeypots
Accept (poster)
Summary: This paper proposes a new active defense framework for PLMs against backdoors attacks. The proposed defense inserts a honeypot module into the original PLM such that the backdoor information is absorbed only by the honeypot module and does not impact the main task module. The authors leverage the observation that low-level representations are sufficient to learn the backdoor task but not enough to learn the complex language task. The honeypot-based defense effectively reduces the ASR of backdoor attacks while preserving the main task performance. Strengths: This paper has the following strengths: + The idea of using an additional honeypot module within the PLM to absorb the backdoor information is interesting and new. + The authors provide a clear discussion about the key observation (i.e., low-level representation is sufficient to learn the backdoor task). + The authors design GCE loss and a new weighted loss to train the honeypot and task classifier with the goal of enforcing the task classifier to focus on clean samples only and the honeypot module to focus on poisoned samples. Weaknesses: This paper has the following weaknesses: 1) Some statements are not clearly justified/explained. For example, Section 5.2 mentions that the proposed defense has better performance on large models according to Table 1. It would be great to provide an analysis/hypothesis about this observation. -2) The design of the weighted CE loss (Equations 3 and 4) is not justified by experiments. Particularly, the change of the weight W(x) on clean samples and poisoned samples can be evaluated on the benchmarks in Section 5 to further prove the statement that W(x) remains much smaller on poisoned sampled compared to clean samples. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1. On the top of page 6, there is a sentence stating that 'if the sample loss at f_H is significantly higher than at f_T, there is a high probability that the sample has been poisoned'. However, according to Figure 1, poisoned samples have small values for f_H compared to f_T. Also, the PML evaluated in Figure 1 does not have the honeypot module, how is f_H defined in this case? Q2. What is the capacity of the honeypot module? Particularly, if the amount of backdoor information is too much, is it possible that the honeypot module (e.g., with few layers) cannot fully absorb the backdoor? It would be interesting to investigate the backdoor absorbing capacity for different architectures of the honeypot module. Q3. What is the possible reason that the proposed defense has better performance on large models (Table 1)? Q4. Section 5.3 proposes adaptive attacks based on [41]. However, there is no introduction to the work in [41]. Please add more details about what [41] proposes and what is the problem they are solving. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please consider addressing the questions and weaknesses above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: Section 5.2 mentions that the proposed defense has better performance on large models, according to Table 1. It would be great to provide an analysis/hypothesis about this observation. What is the possible reason that the proposed defense has better performance on large models (Table 1)? **R1**: Thank you for this insightful question! We also noticed this during our experiments. One explanation is that in larger PLMs, low-level features like words, phrases, and syntax are more distinct and primarily located in the lower layers, while high-level semantic features are captured in the upper layers. This clearer separation in larger PLMs possibly aids our honeypot module in detecting backdoor samples more effectively. We're still in the early stages of understanding this fully, and we're excited to investigate this further in our upcoming research. --- **Q2**: The design of the weighted CE loss (Equations 3 and 4) is not justified by experiments. Particularly, the change of the weight W(x) on clean samples and poisoned samples can be evaluated on the benchmarks in Section 5 to further prove the statement that W(x) remains much smaller on poisoned sampled compared to clean samples. **R2**: Thank you for emphasizing the importance of visualizing W(x), and we apologize for any confusion our submission might have caused. We indeed understand the importance of visually representing W(x) to support our claims. In fact, **the dynamic changes of W(x) are presented in Figure 11 of the appendix (Section C)**. The results show that W(x) of the poisoned sample is significantly lower than clean samples after warm-up epochs, which underscores the effectiveness of our proposed honeypot module in capturing backdoor samples. It's possible this detail was missed during your review, and we trust this clarifies your concerns. --- **Q3**: On the top of page 6, there is a sentence stating that 'if the sample loss at f_H is significantly higher than at f_T, there is a high probability that the sample has been poisoned'. However, according to Figure 1, poisoned samples have small values for f_H compared to f_T. Also, the PLM evaluated in Figure 1 does not have the honeypot module, how is f_H defined in this case? **R3**: Thank you for pointing out this discrepancy! - You're right, and we apologize for the confusion. The correct statement should be that if the sample loss at f_H is considerably lower than at f_T, it suggests that the sample might be poisoned. - Regarding Figure 1, while the PLM showcased does not include the honeypot module, we utilized a probing classifier with an architecture identical to the honeypot, which was attached to various layers. In this context, the f_H refers to the probing classifier. We will correct this error and provide more details in our revision. --- **Q4**: What is the capacity of the honeypot module? Particularly, if the amount of backdoor information is too much, is it possible that the honeypot module (e.g., with few layers) cannot fully absorb the backdoor? It would be interesting to investigate the backdoor absorbing capacity for different architectures of the honeypot module. **R4**: Thank you for this constructive suggestion! Indeed, we have incorporated hard-to-learn backdoor samples in our adaptive attack tests. Impressively, our proposed method exhibited strong resilience against them. Even when backdoor samples are challenging to learn, they often possess low-level features like words, phrases, or syntax patterns. Our results have demonstrated that our honeypot module can still detect and capture these elements, showcasing its strong capacity. We agree that different architectures of the honeypot module might impact the defense performance and will explore this in our future work. --- **Q5**: Section 5.3 proposes adaptive attacks based on [41]. However, there is no introduction to the work in [41]. Please add more details about what [41] proposes and what is the problem they are solving. **R5**: Thank you for your feedback. - Paper [41] primarily focuses on designing an adaptive attack to defeat backdoor detection techniques. Specifically, the authors proposed a strategy to craft backdoor samples so they are similar to clean samples in the model latent space. This approach challenges and potentially bypasses current backdoor defense approaches. - Inspired by the technique proposed in [41], we implemented three adaptive attacks to our NLP task to minimize the latent space disparity between poisoned and clean samples. These three methods can serve as strong adaptive attacks to evaluate the robustness of our proposed method. Notably, the proposed honeypot achieved good performance. - Due to the page limit, we only provide a brief introduction of the study [41] in our paper. Following your suggestion, we will update section 5.3 and provide a more detailed explanation of the adaptive attack in our manuscript. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification! The rebuttal from the authors has cleared my questions. --- Reply to Comment 1.1.1: Title: Thank You for Your Positive Feedback! Comment: Thank you for your positive feedback! It encourages us a lot.
Summary: The paper presents a new defense against backdoor attacks on pre-trained language models (PLMs). By leveraging the observation that the loss of poisoned samples drops faster in early layers of PLMs compared to clean samples, it dynamically reduces the weight of suspicious samples in fine-tuning. Empirical results show the effectiveness of the proposed defense against 4 attacks. Overall, the work represents yet another defense that exploits the learning dynamics difference of clean and poisoned samples. Strengths: - Leveraging the learning dynamics difference of clean and poisoned samples is an interesting idea. Even though it has been exploited in prior work (e.g., [26]), this work proposes to dynamically adjust the weight of suspicious samples, which seems new. - Empirical results show the effectiveness against both word-based and style-based backdoor attacks. - The paper is well-structured and easy to follow. Weaknesses: - The threat model needs better motivation. It assumes a clean PLM, which is fine-tuned using potentially poisoned data. Typically, the PLM is provided by external parties (e.g., downloaded from the Web) while the fine-tuning dataset is managed by the user. It seems a more practical setting that the PLM contains backdoors while the fine-tuning data is clean. - The proposed defense bears a lot of similarity to existing defenses that also exploit the learning dynamics difference of clean and poisoned samples (e.g., [26] and Li et al. "Anti-Backdoor Learning: Training Clean Models on Poisoned Data", NeurIPS 2021). However, there is no empirical comparison with prior work, which makes it difficult to assess its superiority. - While the work targets PLMs, the proposed method seems agnostic to the underlying models. It is suggested to tailor the method to the unique characteristics of NLP models. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Please clarify the threat model. - Compare the defense with other defenses based on the learning dynamics difference of clean and poisoned samples. - Optimize the defense for NLP models. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: The threat model needs better motivation. It assumes a clean PLM, which is fine-tuned using potentially poisoned data. Typically, the PLM is provided by external parties (e.g., downloaded from the Web) while the fine-tuning dataset is managed by the user. It seems a more practical setting that the PLM contains backdoors while the fine-tuning data is clean. **R1**: Thank you for these comments! We provide a clarify of why we chose the proposed threat model as follows: - **Our threat model is practical**. Currently, there are many downstream developers who intend to build their NLP models for services by fine-tuning the PLM with their local samples. These developers may be malicious and can implant hidden backdoors in their models. When such models are subsequently deployed on end-user devices, they pose significant security risks. - **Our threat model is classical**. A majority of recent studies on NLP backdoor attacks and defenses also considered using pretrained models like BERT or RoBERTa and carried out attacks and defenses during the fine-tuning phase [1, 2, 3]. We believe our threat model is widely accepted in the field. - **Pretrained backdoors cannot transfer**. The traditional paradigm for NLP models involves first pretraining the model on a raw corpus and then fine-tuning it on supervised downstream tasks. Backdoors are injected into the model by generating backdoor data and label pairs, typically in the second phase of training the NLP model. To the best of our knowledge, no backdoor attack can be reliably transferred to other or downstream tasks. - **Pretrained models are usually backdoor-free**. Pretrained models are both well-used and widely recognized, developed by prestigious organizations. Together with the previous reason that backdoors cannot transfer, it is unlikely for these developers to inject backdoors during the pretraining phase. We will add more discussions in the appendix of our revision to avoid potential misunderstandings. --- **Q2**: The proposed defense bears a lot of similarity to existing defenses that also exploit the learning dynamics difference of clean and poisoned samples (e.g., [26] and Li et al. "Anti-Backdoor Learning: Training Clean Models on Poisoned Data", NeurIPS 2021) However, there is no empirical comparison with prior work, which makes it difficult to assess its superiority **R2**: Thank you for highlighting the Anti-Backdoor Learning (ABL) method. **We added experiments for ABL and showed the results in General Response R1 and found that ABL's performance is disappointing.** For the explanation of the ABL outcome, please refer to the **General Response R2** for more details. --- **Q3**: While the work targets PLMs, the proposed method seems agnostic to the underlying models. It is suggested to tailor the method to the unique characteristics of NLP models. **R3**: Thank you for this insightful comment! - We consider the defense of using PLM since this is one of the most widely adopted developing schemes, whereas its defense is left far behind. - We admit that the method has no special design based on unique characteristics of NLP models, other than the pre-trained scheme. However, this is not a necessary drawback since **our method is more universal and user-friendly**. It can be adapted to a broad range of NLP models even though their structures may be greatly changed in the future. - In fact, our long-term goal is to design a safe training scheme for using pre-trained models in all tasks. Due to the limitation of time and space, we mainly consider NLP tasks in this paper. We also conduct preliminary experiments on CV tasks in our appendix (Section E) and show promising results. We will evaluate its performance in more realms in our future works. --- **Reference** [1] Biru Zhu, et al. "Moderate-fitting as a natural backdoor defender for pre-trained language models. NeurIPS 2022. [2] Qi, Fanchao, et al. "Onion: A simple and effective defense against textual backdoor attacks." EMNLP 2021. [3] Qi, Fanchao, et al. "Mind the style of text! adversarial and backdoor attacks based on text style transfer." EMNLP 2021. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: The rebuttal and additional experiments have partially addressed my questions. However, I remain concerned about the comparison of this work and prior work (e.g., ABL). The authors argue that ABL is ineffective because the learning rate of poisoned samples is slower than clean samples in fine-tuning. A simple tweak would be to filter out poisoned samples whose loss drops slower. --- Reply to Comment 1.1.1: Title: Additional Clarification Regarding ABL Comment: Thank you for this insightful comment! We hereby provide more explanations and results to clarify potential misunderstandings and further alleviate your concerns. - In our previous rebuttal, we intended to explain why ABL has failed in our studied setting. Indeed, **while the loss of poisoned samples declines more slowly on average in the initial stages, this does not reliably differentiate them from benign samples with high precision.** - To further alleviate your concern, **we adapt ABL with your suggested setting to design its variant (dubbed 'ABL+')**. Specifically, during the backdoor isolation phase for ABL backdoor unlearning, we select the top 1% of samples with the highest loss. As seen in Table 1, **ABL+ still fails to effectively mitigate the backdoor, with an ASR exceeding 50%.** - In Table 2, we display the recall and precision of poisoned samples among the top 1% and 5% of samples with the highest loss (setting: sst2 with RoBERTa$_{base}$ and AddWord attack). It's evident that **even though the recall of poisoned samples in the high-loss samples rises in the initial training phases, it is not sufficient to separate poisoned samples with a precision higher than 90%**, which is a threshold commonly needed for the ABL method. Furthermore, this separation is strongly influenced by the number of learning steps, rendering the defense performance unstable. - While alternative methods might exist that can better differentiate between poisoned and clean samples using variations in learning speeds, we'd like to emphasize that **our method, which leverages differences in poison signals within the model structure, can provide a more reliable and stable detection result.** Also, this insight has the potential to augment numerous existing detection methods, enhancing their performance. **Table 1: The comparison to ABL and ABL+** | Model$\downarrow$ | Dataset$\downarrow$ | Attack$\downarrow$, Defense$\rightarrow$ | ABL (ACC/ASR) | ABL+ (ACC/ASR) | Honeypot (ACC/ASR) | |:-----:|:-------:|:------:|:-----:|:---:|:---:| |RoBERTa$_{base}$|SST-2|AddWord|90.25/76.21 | 91.14/78.72|93.71/6.65| |||AddSent| 91.17/69.24 | 90.52/83.00 |92.39/7.71| ||IMDB|AddWord| 92.59/87.14 | 93.17/95.49 | 93.72/5.60| |||AddSent| 89.75/88.77| 91.43/96.57| 92.72/6.56| |RoBERTa$_{large}$|SST-2|AddWord| 92.03/74.98| 91.55/82.11|94.15/5.84| |||AddSent|91.77/67.05 |90.47/55.74 | 94.83/4.20 | ||IMDB|AddWord| 92.59/75.09| 91.06/78.91 | 94.12/3.60 | |||AddSent| 89.07/90.54| 90.11/82.19| 93.68/6.32 | **Table 2: Recall of Poison Samples on Top 1% and 5% High Loss Samples** | Steps $\downarrow$ | (Precision /Recall) of the poisoned data on top 1% high loss | (Precision /Recall) of the poisoned data on top 5% high loss | |:---:|:--------:|:--------:| | 50 | 11.17% / 11.29% | 7.40% / 37.44% | | 100 | 9.85% / 9.95% | 9.11% / 46.06% | | 150 | 6.76% / 6.83% | 4.52% / 22.88% | | 200 | 7.20% / 7.28% | 8.67% / 43.83% | | 250 | 3.52% / 3.56% | 7.49% / 37.89% | | 300 | 0.73% / 0.74% | 5.58% / 28.23% | | 350 | 10.44% / 10.54% | 9.05% / 45.76% | | 400 | 4.41% / 4.45% | 4.96% / 25.11% | | 450 | 2.64% / 2.67% | 4.61% / 23.32% | | 500 | 0.00% / 0.00% | 0.97% / 4.90% | 550 | 0.44% / 0.44% | 1.17% / 5.94% | | 600 | 0.00% / 0.00% | 0.00% / 0.00% |
Summary: This paper first makes an observation that in a backdoor poisoning attack against a pretrained language model, the lower layers learn the backdoor feature quickly and easily. Based on this observation, the authors then design a honeypot-based defense that catches the training samples that could be learned with low loss by the lower layers, hoping that these would be poisoned. On the other hand, the loss from the training samples that cannot be learned easily by lower layers is upweighted, which defuses the backdoor. The authors evaluate their defense on multiple NLP classification tasks on multiple architectures and attacks, including adaptive attacks. Strengths: + The proposed defense is well reasoned based on empirical observations about how backdoor attacks are learned. + Detailed evaluation, considers stronger adaptive attacks as well and shows success. + The idea of probing hidden layers to make observations about the learning dynamics is interesting as most work focuses on looking at loss dynamics during training. Weaknesses: - The main idea the defense relies on is not a new one and there are already successful defenses that exploit this idea. The authors have not considered these defenses as a baseline. - Some additional experiments regarding the impact of the defense in low-ratio poisoning regimes would be informative to have. The defense essentially observes that backdoor poison samples are learned easily by the lower layers, and, as a result, a defense that isolates away the samples learned in lower layers can prevent the attack. This is a good observation but a very similar one has been made before toward a defense. For example, Anti-Backdoor Learning (ABL) by Li et al. has also made this observation and proposed a similar defense based on isolating easy-to-learn samples. A difference in the work under review is using the lower layers instead of loss dynamics during training as a measure of sample difficulty. However, it is known that these notions are very correlated (e.g., Deep Learning Through the Lens of Example Difficulty, by Baldock et al.), so this difference might not matter after all. This brings us to my first bullet point, the authors should've evaluated their work against ABL (or a more recent follow-up if that exists) as the proposed defense and ABL share their starting point. I'm not convinced that the proposed defense can improve upon ABL significantly without offering a different insight. That being said, the observation that backdoor poison samples are easy to learn is an artifact of the attack and its parameters. There are recent backdoor attacks that break defenses like ABL by crafting more difficult-to-learn poisons. For example, Narcissus: A Practical Clean-Label Backdoor... by Zeng et al. Lowering the poison percentage is a way to craft such poisons but the experiment provided in Section 5.3 is not enough to be convincing. Ideally, I would like to see a plot when you apply the DPR-AST attacks in Section 5.3 but you vary the poison percentage, starting from a percentage that achieves very low ASR. Essentially, the x-axis is the poison ratio and the y-axis is the ASR, and there are two curves, one when the honeypot defense is applied and the other for an undefended model. In particular, I would like to see if there is a regime where the undefended model achieves lower ASR than the defended model because the proposed honeypot defense starts boosting difficult-to-learn poison samples (which could be poison samples depending on the poison ratio). This way, we can have a better idea when the honeypot defense is viable and effective and when it is necessary to deploy another type of defense that makes different assumptions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How does the observation in Section 3 change if the poison ratio is varied (let's say between 0.1% to 10%)? Do you think the observation would persist in low poison percentages or the poison samples would not be learned easily in lower layers anymore? - Why do you think, in Figure 1, the CE Loss for poison samples increases over layers? This seems to conflict with the claim in Line 200 (easier samples, the majority of which are poisoned samples). If poison samples are easier, why do they seemingly become more difficult at the deeper layers? - What type of model did you use to train the probing classifiers in Section 3? Is it a simple linear model? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: Anti-Backdoor Learning (ABL) by Li et al. proposed a similar defense based on isolating easy-to-learn samples. What is the difference between ABL and your proposed work? **R1**: Thank you for highlighting the ABL method. In general, ABL employs a two-stage gradient ascent mechanism in standard training: 1) isolating backdoor examples and 2) mitigating the backdoor with backdoor unlearning training. We argue that our method has fundamental differences from ABL, as follows. - **Pre-trained Model Assumption**: Due to constraints on the character count, we kindly direct you to **Q2 in our general response** for a detailed comparison. - **Impact of Model Structure in Backdoor Learning**: ABL and recent studies mainly relied on the disparities in loss and learning speed to differentiate between the backdoor and clean samples, which proved to be insufficient in several scenarios [5]. In contrast, **our findings emphasize the model structure may provide a more reliable detection**. Specifically, we've identified that the backdoor signal is considerably denser in the lower layers of the PLM embedding, enhancing our ability to distinguish between the two. This insight has the potential to augment numerous existing detection methods, enhancing their performance. --- **Q2**: There are recent backdoor attacks that break defenses like ABL by crafting more difficult-to-learn poisons. For example, Narcissus proposed in reference [4]. Lowering the poison percentage is a way to craft such poisons, but the experiment provided in Section 5.3 is not enough to be convincing. **R2**: Thank you for this insightful comment! - We argue that using difficult-to-learn poisons cannot break our defense. Firstly, we have shown that our method is resistant to three adaptive attacks in Section 5.3, where all of them can be regarded as difficult-to-learn poisons. Secondly, we argue that difficult-to-learn poisons still need to exploit low-level semantic features to create backdoors. Accordingly, our honeypot module can still capture them and thus alleviate their malicious effects. - Besides, Narcissus targeted image classification tasks and cannot be directly generalized to NLP tasks due to their significant differences. We are deeply sorry that we fail to conduct Narcissus due to the limitation of rebuttal time. We will add more details and discussions in the appendix of our revision. --- **Q3**: I would like to see a plot when you apply the DPR-AST attacks in Section 5.3, but you vary the poison percentage, starting from a percentage that achieves very low ASR. **R3**: Thank you for this constructive suggestion! We plot the DPR-AST attacks with the same setting in Section 5.3 and adjust for varying poison percentages, ranging from 0.1% to 10.0%. (Please refer to Figure 14 in our uploaded PDF). Specifically, **we found that there is no regime that the undefended model can achieve lower ASR than the defended model**, given that honeypot consistently manages to capture a portion of the poisoned samples compared to the no defended model. --- **Q4**: How does the observation in Section 3 change if the poison ratio is varied (let's say between 0.1% to 10%)? **R4**: Thank you for this insightful question! **The honeypot can effectively capture the backdoor signal when the poison ratio reaches a threshold where the stem model can learn the backdoor function.** This is because the backdoor signal is more concentrated in the lower layers compared to the top layers. We hereby conduct experiments on SST2 with AddWord attack at poison ratios ranging from 0.1% to 10.0%. As shown in the following table, **our method is highly effective under low poison rates**. **Table 2: The defense performance under extremely low poison rates.** | poison ratio $\rightarrow$ | 0.1% | 0.5% | 1.0% | 2.5% | 5.0% | 7.5% | 10.0% | |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |ACC| 93.83 | 93.81| 93.78| 93.71 |93.71 | 93.11 | 92.67 | |ASR|6.90 | 7.18 | 7.59 | 7.81| 6.56 | 6.77 | 6.30 | --- **Q5**: In Figure 1, the CE Loss for poison samples increases over layers. This seems to conflict with the claim in Line 200 (easier samples, the majority of which are poisoned samples). If poison samples are easier, why do they seemingly become more difficult at the deeper layers? **R5**: We are deeply sorry that our submission may lead you to some misunderstandings. **In lines 199-201, our assertion is not that poisoned samples are universally 'easier' samples. Instead, we emphasize that these samples become 'easier' only when leveraging representations from the lower layers.** Given that existing text backdoor triggers inevitably leave abundant information at the word, phrase, or syntactic level, these signal is more pronounced at the lower layer [3]. In the deeper layers, the embeddings predominantly carry semantic features, which diminishes the backdoor signal, and the poison sample becomes 'harder' to learn [2]. --- **Q6**: What type of model did you use to train the probing classifiers in Section 3? Is it a simple linear model? **R6**: Thank you for raising this question. In Section 3, we maintain consistency with our honeypot design for the probing classifiers. Specifically, we use a structure that consists of one transformer layer coupled with a simple linear classifier. While a simple linear model could potentially be sufficient, we plan to explore this in future work. --- **Reference** [1] Li, Yige, et al. "Anti-backdoor learning: Training clean models on poisoned data." NeurIPS 2021. [2] Biru Zhu, et al. "Moderate-fitting as a natural backdoor defender forpre-trained language models." NeurIPS 2022. [3] Ganesh Jawahar, et al. "What does bert learn about the structure of language? ACL 2019. [4] Zeng, Yi, et al. "Narcissus: A practical clean-label backdoor attack with limited information." arXiv 2022. [5] Xiangyu Qi, et al. "Revisiting the assumption of latent separability for backdoor defenses."" ICLR, 2023. --- Rebuttal Comment 1.1: Title: Thank you for your detailed response! Comment: I'm satisfied with the ABL experiments and the poison ratio experiments, I'm increasing my score. There's a deeper truth here. ABL isolates easy samples by judging their training loss dynamics. Honeypot uses layer-wise dynamics. These two notions are correlated in the experiments for CV tasks (from Baldock et al.). Low training loss -> learned at an earlier layer easily. However, it seems like this is not the case for fine-tuned PLMs. You gave an intuitive answer to this phenomenon in your general comment but I think it justifies more experiments to understand why. To me, the most interesting result is in Supplement Figure 6. The poison samples become higher loss in deeper layers relative to clean samples. I would not expect that, and it's not surprising that ABL fails in this case (because it judges difficulty using the last layer). This brings me to a version of ABL that I think will work: attach a probe to an earlier layer and apply the loss-based sample filtering based on the loss from this probe (not from the model's last layer output). I would love to see a deeper, more systemic understanding of this fundamental separation from our prior understanding of CV models. --- Reply to Comment 1.1.1: Comment: We would like to sincerely thank you again for your time and valuable comments. We hereby provide more insights and explanations. --- **NQ1:** A new version of ABL that I think will work: attach a probe to an earlier layer and apply the loss-based sample filtering based on the loss from this probe (not from the model's last layer output). **NR1:** Thank you for the insightful suggestion! In fact, the "probe" method closely mirrors the core idea behind our honeypot :) Following your suggestion, we conducted initial experiments by placing the probe at the lower layers of PLMs for backdoor sample isolation. We denote this new variant as ABL++. As illustrated in Table 1, **using the lower layer probe has shown promising enhancements to ABL (with an ASR < 30%)**. These results suggest that **structural differences in backdoor signals can be integrated with existing defenses to further boost their effectiveness**, especially for pretrained models. We will explore this phenomenon more deeply in our revised manuscript. **Table 1: The comparison to ABL and ABL++** | Model$\downarrow$ | Dataset$\downarrow$ | Attack$\downarrow$, Defense$\rightarrow$ | ABL (ACC/ASR) | ABL++ (ACC/ASR) | Honeypot (ACC/ASR) | |:-----:|:-------:|:------:|:-----:|:---:|:---:| |RoBERTa$_{base}$|SST-2|AddWord| 90.25/76.21 | 93.00/22.57 | 93.71/6.65| |||AddSent| 91.17/69.24 | 92.01/15.07 | 92.39/7.71| --- **NQ2:** I would love to see a deeper, more systemic understanding of this fundamental separation from our prior understanding of CV models. **NR2:** Thank you for highlighting this interesting and important point! - Although this paper mainly focuses on pretrained language models in NLP, we also recognize the significance of CV tasks. - To further study this interesting problem, we conduct preliminary experiments on CIFAR-10 with ResNet-50 pretrained on ImageNet. The results show that the **pre-trained CV model also initially concentrates on learning task-related features before backdoor-related features**. In other words, **this interesting inconsistent behavior is due to different training paradigms** (i.e., fine-tune pre-trained model v.s. training from scratch) instead of different tasks. We will include this in our updated supplementary. - We argue that previous backdoor attack/defense work has not identified this phenomenon mostly because they usually train CV models (e.g., VGG and ResNet) on datasets (e.g., CIFAR-10) from scratch. Accordingly, we believe our observations are critical to future defense research since CV tasks are currently also embracing the training paradigm of fine-tuning large foundation models. We will further explore this problem in CV tasks in our future works.
Summary: This paper proposes a method to defend against NLP backdoors during training. The proposed method works by using an honeypot module to absorb backdoor information, and prevent the backdoor behaviors to be learned by the stem network. Experiments on SST-2, IMDB, and OLID demonstrate the effectiveness of the propsoed method. Strengths: * The investigated problem is interesting. * The motivation of this paper is good. Weaknesses: * The proposed method is based on the observation that learning the backdoor task is generally easier than learning the main task. However, this observation may not always hold true. In the case of label-specific backdoor attacks (also called as all-to-all attak in BadNets [1]), where samples with different original labels have different target labels, the backdoor task becomes even more complex compared to the main task. To achieve the desired backdoor behavior in such attacks, the model must first identify the correct label of the backdoor samples before making backdoor predictions based on that recognized label. Unfortunately, this paper lacks a discussion and empirical results concerning label-specific attacks. As a result, the generalizability of the proposed method to different types of attacks remains unclear. * Comparison to related work CUBE [2] is missing. While this paper claims that the proposed method surpasses existing defenses, it fails to include a comparison to CUBE, a training-time textual backdoor defense method. It is recommended to incorporate a comparison with CUBE to provide a more comprehensive evaluation of the proposed method's performance. * The proposed approach essentially considers samples with W(x) values below the threshold value c as identified poisoning samples. To gain a deeper understanding of the proposed method's effectiveness, it is recommended to discuss the measures of detection precision and recall in detail. [1] Gu et al., BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain. arXiv 2017. [2] Cui et al., A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks. NeurIPS 2022 Datasets & Benchmarks. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The limitations is discussed in the Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: The proposed method is based on the observation that learning the backdoor task is generally easier than learning the main task. However, in the case of the all-to-all attack, where samples with different original labels have different target labels, the backdoor task becomes even more complex compared to the main task. **R1**: Thank you for this insightful comment! - In this paper, **we only consider all-to-one attacks simply following the settings used in almost all backdoor defenses in NLP**. - Arguably, **all-to-all attacks are less practical for attackers** since they are less controllable and harder to succeed compared to all-to-one attacks. - However, we do understand your concerns. Accordingly, we also compare defenses' performance under all-to-all attacks where the target label is set to $y' = (y + 1) \mod K$, where $K$ is the number of classes. We conduct experiments on SST-2 and AGNews datasets with the BERT model. As shown in the following table, **our method is better than chosen baseline defenses**. However, we also notice that **the performance of all defenses is significantly lower than that of the case in defending against all-to-one attacks**. This is because these attacks are harder to learn, as you suggest, and thus their attack signals are weaker for defenses to catch. We will explore how to better defend against them in our future work. **Table 1: Results on all-to-all attack** | Model | Dataset | BKI (ACC/ASR) | Onion (ACC/ASR) | CUBE (ACC/ASR) | Honeypot (ours) (ACC/ASR) | |:-----:|:-------:|:-----:|:-----:|:-----:|:-----:| |BERT$_{base}$|SST-2|91.62/91.05|89.25/69.24|90.12/59.09| **89.10/46.21** | ||AGNews|92.69/88.69|89.64/63.20|91.39/58.19|**90.67/39.78**| |BERT$_{large}$|SST-2|93.11/89.44|92.75/79.18|92.36/51.35|**91.85/46.67**| ||AGNews|93.40/91.22|91.58/70.32|93.01/60.10|**91.42/41.44**| --- **Q2**: It is recommended to incorporate a comparison with CUBE [1] to provide a more comprehensive evaluation of the proposed method's performance. **R2**: Thank you for this constructive suggestion! To further alleviate your concerns, we compare our method with CUBE on SST-2 and IMDB datasets under AddWord and AddSent. The results (from Table 1 in our General Response) show that **our method is more effective (with lower ASR)**. We will provide more details and experiments in the appendix of our revision. --- **Q3:** The proposed approach essentially considers samples with W(x) values below the threshold value c as identified poisoning samples. To gain a deeper understanding of the proposed method's effectiveness, it is recommended to discuss the measures of detection precision and recall in detail. **R3**: Thank you for this constructive suggestion! - In general, different from the backdoor unlearning method [2], which requires to detect of a small portion of poison samples, **our honeypot module intends to capture almost all poisoned samples (i.e., have a high recall) instead of correctly identifying poison samples (i.e., high precision)**. It is because poisoned samples are highly effective, and even a few remaining ones can still create hidden backdoors during model training. - Following your suggestions, we hereby calculate the precision and recall of detecting poisoned samples generated by AddWord and StyleBKD on the SST2 dataset. As shown in the following Table 2, **our method can consistently achieve high recall while maintaining reasonable precision across diverse thresholds**. Although the precision might not be as consistently high, the main experiments in our paper indicate that reducing the learning weight for those clean samples with low loss has minor effects in reducing the performance of the original task. **Table 2: The precision and recall of our honeypot module in detecting poisoned samples.** | c$\downarrow$ | AddWord (Precision / Recall) | AddSent (Precision / Recall) | |:---------:|:------------:|:-------------:| | 0.05 | 58.50 / 87.71 | 53.29 / 90.46 | | 0.1 | 32.60 / 99.28 | 34.37 / 99.54 | | 0.2 | 18.79 / 99.58 | 15.21 / 99.87 | **Reference** [1] Cui, Ganqu, et al. "A unified evaluation of textual backdoor learning: Frameworks and benchmarks." NeurIPS 2022. [2] Li, Yige, et al. "Anti-backdoor learning: Training clean models on poisoned data." NeurIPS 2021. --- Rebuttal Comment 1.1: Comment: Dear Reviewer 5e6e, Thank you once again for your valuable time and constructive comments. We would like to kindly inform you that we have addressed your concerns in our rebuttal by: **(1)** evaluating with the all-to-all attack, **(2)** comparing to more baselines, and **(3)** providing the results of detection precision/recall. As the reviewer-author discussion phase is nearing to the end, we want to check in with you to see if you have any further questions or concerns regarding our response. We are more than happy to answer any additional questions during the post-rebuttal period. Your feedback will be greatly appreciated. --- Rebuttal Comment 1.2: Comment: Thanks for your responses. After reading the rebuttal, my concerns about the underlying assumption of this paper (learning the backdoor task is easier than learning the main task) still remains. 1. I don't think all-to-all attack is absolutely less practical. Compared to all-to-one attack, all-to-all attack can make the model predict different target labels based on the selection of the original labels, while one all-to-one trigger only has one fixed target label. Consider a sentiment classification scenario, the all-to-all trigger can flip the positive prediction to negative and also convert the negative prediction to positive, while the all-to-one trigger is only associate to one of the sentiment. In this case, all-to-all attack is actually more practical and powerful than the all-to-one trigger. Also, the attack success rates of the all-to-all attack on the well-trained models are typically above 90%, which are high enough. At least, all-to-all attack is an important type of attacks in the machine learning backdoor field. 2. There are many NLP backdoor defenses that are able to defend against all-to-all attack, such as Liu et al. and Shen et al. NIST TrojAI competition (https://pages.nist.gov/trojai/) also includes all-to-all attack and label-specific attack in the NLP rounds. 3. This paper claims the proposed method is robust to the adaptive attack. However, the all-to-all attack is actually a simple yet effective adaptive attack that diminish the performance of the proposed attack significantly, which makes the claim (i.e., robustness on the adaptive attack) might not be entirely accurate. The observations and the assumptions behind the proposed method is theoretically contradict to the ability for defending against all-to-all attack or label-specific attack, which might be a nonnegligible technical flaw. Based on these concerns, I keep my score. Liu et al., "PICCOLO : Exposing Complex Backdoors in NLP Transformer Models" in IEEE Symposium on Security and Privacy 2022. Shen et al., "Constrained Optimization with Dynamic Bound-scaling for Effective NLPBackdoor Defense" in ICML 2022. --- Reply to Comment 1.2.1: Title: Rebuttal regarding all-to-all attack by Authors Comment: Thank you for your insightful feedback. In light of them, we'd like to provide a clearer explanation to address potential misconceptions regarding our defense setting and further alleviate your concerns for the all-to-all attack. - We deeply appreciate your in-depth analysis of the all-to-all attack's practicality. Indeed, we recognize and value the inherent advantages of all-to-all attacks, especially in specific contexts like sentiment classification. - It appears that there might be some misunderstandings regarding our threat model. The referenced papers [1,2] are primarily tailored for backdoor detection tasks, where given a suspect model and a handful of benign samples, the aim is to detect or reverse-engineer the backdoor trigger. This inverted trigger is then utilized for backdoor unlearning. Yet, as outlined in our paper (lines 114-116), **our method diverges notably from these two-stage backdoor removal efforts. We don't rely on a clean dataset, instead advocating for a training-time defense where the model remains benign even when trained on a tainted dataset**. Most studies under our threat model, such as [3-7], primarily delve into the all-to-one attack, and our preliminary evaluations (as outlined in Table 1) suggest that existing defense approaches struggled to counter the all-to-all attack effectively. - To further address your valid concerns, we conducted experiments and revealed an interesting phenomenon: the all-to-all attack actually enhances our study's findings. Specifically, **we found a clear two-stage learning process that PLMs initially concentrate on task-related features and then shifting their attention to backdoor features**. This behavior stems from the inherent nature of the all-to-all attack: models need to first learn the primary task before delving into the backdoor task. This clear separation led us to suggest a straightforward solution: employing early stopping to prevent the model from learning the backdoor features. **As depicted in Table 1, implementing early stopping substantially reduce the ASR to around 10% while only marginally affecting original task performance**. These initial findings indicate that within the NLP realm, the all-to-all attack might not be as formidable as presumed. Furthermore, our method still shows the ability for defending against all-to-all attack. We're eager to delve deeper into all-to-all attacks and conduct more extensive experiments in the future. Once again, we're grateful for your valuable insights, and we hope this offers a clearer perspective on our work. **Table 1: Results on all-to-all attack** | Model | Dataset | BKI (ACC/ASR) | Onion (ACC/ASR) | CUBE (ACC/ASR) | Honeypot (ACC/ASR) | Honeypot+Early stopping (ACC/ASR) | |:-----:|:-------:|:-----:|:-----:|:-----:|:-----:|:-----:| |BERT$_{base}$|SST-2|91.62/91.05|89.25/69.24|90.12/59.09|89.10/46.21|88.64/14.33| ||AGNews|92.69/88.69|89.64/63.20|91.39/58.19|90.67/39.78|90.73/9.61| |BERT$_{large}$|SST-2|93.11/89.44|92.75/79.18|92.36/51.35|91.85/46.67|90.36/9.63| ||AGNews|93.40/91.22|91.58/70.32|93.01/60.10|91.42/41.44|90.69/7.25| **References** [1] Yingqi Liu, Guangyu Shen, et al., "PICCOLO : Exposing Complex Backdoors in NLP Transformer Models" in IEEE Symposium on Security and Privacy 2022. [2] Guangyu Shen, Yingqi Liu, et al., "Constrained Optimization with Dynamic Bound-scaling for Effective NLPBackdoor Defense" in ICML 2022. [3] Yige Li, Xixiang Lyu, et al. "Anti-backdoor learning: Training clean models on poisoned data." NeurIPS 2021 [4] Fanchao Qi, Yangyi Chen, et al. "ONION: A Simple and Effective Defense Against Textual Backdoor Attacks." EMNLP 2021 [5] Ganqu Cui, Lifan Yuan, et al. "A unified evaluation of textual backdoor learning: Frameworks and benchmarks." NeurIPS 2022. [6] Biru Zhu, Yujia Qin, et al. "Moderate-fitting as a natural backdoor defender forpre-trained language models." NeurIPS 2022 [7] Xiangyu Qi, Tinghao Xie, et al. "Revisiting the assumption of latent separability for backdoor defenses." ICLR, 2023.
Rebuttal 1: Rebuttal: # General Response to All Reviewers We sincerely thank the reviewers for dedicating their time and providing invaluable feedback. We present a general reply below in response to the concerns raised regarding baseline comparisons. **Q1**: Comparison with more baselines. **R1**: Thank reviewers for this constructive suggestion! - To address the reviewer's concerns, we compare our method with two additional baselines (i.e., CUBE [1] and ABL [2]). We conduct experiments on SST2 and IMDB datasets with the RoBERTa model, using AddWord and AddSent attacks. As shown in the following table, **our Honeypot method is more effective than baselines with lower ASR and higher ACC**. **Table 1: The comparison to CUBE and ABL** | Model$\downarrow$ | Dataset$\downarrow$ | Attack$\downarrow$, Defense$\rightarrow$ | CUBE (ACC/ASR) | ABL (ACC/ASR) | Honeypot (ours) (ACC/ASR) | |:-----:|:-------:|:------:|:-----:|:---:|:---:| |RoBERTa$_{base}$|SST-2|AddWord| 92.32/17.34 | 90.25/76.21 | **93.71/6.65** | |||AddSent| 92.48/27.25 | 91.17/69.24 | **92.39/7.71** | ||IMDB|AddWord| 91.58/28.76 | 92.59/87.14 | **93.72/5.60** | |||AddSent| 92.12/36.41 | 89.75/88.77| **92.72/6.56** | |RoBERTa$_{large}$|SST-2|AddWord| 94.09/18.67 | 92.03/74.98| **94.15/5.84** | |||AddSent| 94.32/24.92 |91.77/67.05 | **94.83/4.20** | ||IMDB|AddWord| 93.68/17.85 | 92.59/75.09| **94.12/3.60** | |||AddSent| 93.50/23.88 | 89.07/90.54| **93.68/6.32** | - **Explaining the Results of ABL:** In Table 1, we found that ABL only achieves disappointing results with an ASR higher than 70%. To shed light on this outcome of ABL, we assessed the backdoor isolation capabilities of ABL. Following the setting in the ABL paper, we initiated a hyperparameter search where $\gamma$ denotes the loss threshold and $T_{te}$ stands for the epochs of the backdoor isolation stage. Table 2 presents the detection precision of the 1% isolated backdoor examples, which is crucial for the ABL backdoor unlearning performance. However, **our findings reveal that the percentage of poisoned samples is less than 20%, which accounts for ABL's suboptimal performance.** **Table 2: The isolation precision (%) of ABL.** | $\gamma$ $\downarrow$ $T_{te}$ $\rightarrow$ | 1 epoch | 5 epochs | 10 epochs | |:---:|:---:|:---:|:---:| |0.5| 2.1 | 11.7 | 13.5 | |1.0| 5.1| 12.3 | 15.3 | |1.5| 5.5 | 12.4 | 15.6| --- **Q2:** Why does Honeypot perform better than ABL? **R2:** The ABL method primarily relies on the observation that 'models learn backdoored data much faster than they do with clean data' [2]. However, it is crucial to note that **this assumption mainly holds for models trained from scratch in computer vision tasks**. Our research and reference [3] both demonstrated an opposite behavior that **pre-trained language models first concentrate on learning task-specific features before backdoor features**. A plausible explanation for this behavior is the richness of semantic information already present **in the top layers** of the pre-trained language models. Thus, the original task becomes more straightforward compared to the backdoor functionality, causing the model to prioritize learning the main task first. As a result, ABL struggles to yield satisfactory detection performance during the backdoor isolation stage by selecting the "easy-to-learn" samples (as shown in Table 2), and we show that ABL obtains a high ASR in the following backdoor unlearning process (as shown in Table 1). In contrast, our findings underscore the significance of examining model structure when identifying backdoor samples, revealing that backdoor samples become more identifiable in the lower layers of PLMs. **Reference** [1] Cui, Ganqu, et al. "A unified evaluation of textual backdoor learning: Frameworks and benchmarks." NeurIPS 2022. [2] Li, Yige, et al. "Anti-backdoor learning: Training clean models on poisoned data." NeurIPS 2021. [3] Biru Zhu, et al. "Moderate-fitting as a natural backdoor defender forpre-trained language models." NeurIPS 2022. Pdf: /pdf/8d3242ce46cb329162ac7a5cfa168f08e75d77f4.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Towards Accelerated Model Training via Bayesian Data Selection
Accept (poster)
Summary: This paper followed up reducible hold-out loss selection (RHO-LOSS) method for data selection and improved it by proposing a reasonable approximation for the non-trivial objective function and eliminating the need for extra hold-out data. Strengths: 1. Authors present both theoretical results (the lower bound for hard-to-estimate objective) and empirical results (algorithm and experiment results). Weaknesses: 1. I am not fully convinced by the paper that their proposed method out-performs existed SOTA, namely RHO-LOSS much, specifically from the results in table 1 and figure 1. I wonder if authors can provide more intuitive comparisons between the proposed method the RHO-LOSS, either empirically show effectiveness and efficiency of the proposed method, or theoretically show the proposed lower bound is more solid than approximation used by RHO-LOSS. 2. The ablation study needs more experiment result (more datasets and more model structure) to get valid conclusion. I also wonder why compare with baseline instead of RHO-LOSS. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In addition to questions addressed in weakness section, I have following questions for the authors: 1. How is target accuracy selected? Specifically for table 3, why select 30% and 40% as target accuracy? 2. I am curious if each training epoch for RHO-LOSS and your proposed method take same amount of time considering they have different data selection steps? 3. It seems the proposed method performs well on classification datasets with large number of classes (CIFAR 100, ILSVRC12 and WebVision datasets) compared with dataset with smaller number of classes (CIFAR 10). I wonder authors can share some ideas from the perspective of the designed algorithm about this. 4. I wonder if the authors have look into how zero-shot predictor affects the method performance. In other word, is the proposed method robust to poorly-performed predictor? If not, how much will the model be impacted? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, authors have addressed their limitation, namely the method performance hugely depends on effectiveness of the zero-shot predictor. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for providing valuable comments. Below, we address each concern in detail, and we sincerely hope that our response proves satisfactory and leads to a higher score. **Q1: More intuitive comparisons between the proposed method and RHO-LOSS** **A1:** We first clarify that theoretically, our method builds a more reliable approximation to the selection objective in Eq 4 than RHO-Loss. In particular, RHO-LOSS proposes approximating $\log p(y|x, D^∗, D_{t−1})$ with $\log p(y|x, D^∗)$ and approximating the posterior predictive with the point-estimate models’ predictions, which are both less principled. In comparison, we maintain a Bayesian perspective and develop a lower bound, i.e., Eq 9. Our selection objective is intriguing for several reasons: 1) the first term and $\alpha$ times the third term perform expectation and logarithm operations in reverse order, resulting in a quantity similar to the uncertainty estimates in Bayesian deep learning [39]; 2) the third term helps to prevent the selection of redundant samples; 3) the second term prioritizes points with high semantic alignment with their annotations since the validation data follows the ground truth data generating distribution. These three forces are integrated adaptively to accelerate model training across various stages. In the aspect of empirical comparison, apart from the results reported in Tab. 1-3, the results in Fig. 1 are insightful. Specifically, Fig. 1a evidences that data selected by our method has a lower label noise. Fig. 1b shows that our method selects slightly fewer redundant data points than RHO-LOSS. We will add these clarifications to the revision and respectfully ask the reviewer to re-evaluate our paper. **Q2: Regarding ablation study** **A2:** We provide clarification to the raised concerns. Firstly, our experiments in sections 4.1-4.4 have already demonstrated the effectiveness of our method across various datasets, including normal, noised, imbalanced, and web-scraped data. Therefore, our ablation studies do not specifically focus on this aspect. Secondly, we have conducted ablation studies on both the architecture of the trained model (Fig. 2) and the architecture of the zero-shot predictor (Fig. 3). These ablation studies have already established the generality and versatility of our method. Since we have already validated the superiority of our method over RHO-Loss in the main experiments and the purpose of the ablation studies is to gain a better understanding of the behavior of our approach, there is no need to compare with RHO-Loss in the ablation. To summarize, our empirical evaluations and ablation studies provide substantial evidence to support the effectiveness of our method, so we respectfully disagree with the assessment that our current conclusion lacks proper support. **Q3: How is target accuracy selected?** **A3:** We follow RHO-LOSS [31] to select the target accuracy on CIFAR. In the case of WebVision, which is not covered by [31], we select 30% and 40% according to the final accuracies of RHO-Loss ($\approx$ 40%) and those of our approach ($\approx$ 60%). **Q4: Per-epoch training time** **A4:** Thanks for the suggestion. Here we provide a direct comparison of per-epoch training time between RHO-LOSS and our method on CIFAR-100: |Method | Per-epoch training time | | -------- | -------- | | Uniform | 14s | | RHO-LOSS | 18s | | Proposed | 21s | As shown, our approach has only slightly increased per-epoch time over RHO-Loss. It arises from that we use a CLIP-ResNet50 zero-shot predictor to compute the second term in Eq 16, whereas [31] uses a ResNet18 validation model. In fact, this gap can be reduced by a simple implementation trick—pre-computing the CLIP-ResNet50/ResNet18 predictions for each sample in the training set before training (this is done in [31] but not employed in our reproduction). According to Tab. 1, compared to RHO-Loss, we can reduce the number of epochs required to reach 40% accuracy from 48 to 32, and for 52.5% from 77 to 53. Combining these results, we see a $48 * 18 / 32 / 21 = 1.29$ or $77 * 18 / 53 / 21=1.25$ times practical acceleration. **Q5: Clearer performance gain on datasets with a larger number of classes** **A5:** This is because the difficulty of the classification problem is directly correlated with the number of classes involved. This is evident in the final accuracy, where achieving approximately 90% accuracy on CIFAR-10 is relatively straightforward while obtaining 60-70% accuracy on the other datasets poses a greater challenge. So, the conclusion is that our method surpasses the baselines more obviously on more difficult problems. The WebVision dataset setting closely resembles real-world scenarios, and the substantial performance improvement demonstrated by our method over the baselines on it validates our practical value. **Q6: Is the proposed method robust to poorly-performed predictor?** **A6:** Yes. As validated in Fig. 3 and stated in L267-269, although the zero-shot accuracy of the CLIP-RN50 variant is significantly lower than that of CLIP-ViT-B/16, the speedup effect of the two zero-shot predictors is similar. Note that CLIP-RN50 is the smallest model in the CLIP family, while CLIP-ViT-B/16 is rather large. This finding suggests the robustness of our method against the selection of the zero-shot predictor. We further try to replace the zero-shot predictor in our method with the validation model in RHO-Loss. The results on CIFAR-100 are listed below: |Method | Epochs to reach 40.0% |Epochs to reach 52.5% |Final acc. | | -------- | -------- |-------- |-------- | | RHO-Loss| 48 | 77| 61| | Proposed - zero-shot predictor + validation model from RHO-Loss | 30 | 52 | 63 | | Proposed | 32 | 53| 63| The above comparison double confirms the robustness of our method and directly proves the Bayesian treatment in our method is beneficial. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed responses to my reviews. After carefully reading their response, I decide to raise my score to 5. --- Reply to Comment 1.1.1: Title: Thanks Comment: Thank you for the raised score. We will carefully revise our paper to include the discussions in the rebuttal. Thank you again!
Summary: The paper builds on recently proposed work that accelerates training through online batch selection using generalisation loss as selection criterion, by using LaPlace approximation for a stronger Bayesian approximation and using off-the-shelf pre-trained models. The paper presents theory deriving their selection function and then demonstrating its efficacy on a number of datasets in the vision setting. Strengths: - Novelty on top of RHO-Loss: (1) using pre-trained models as validation model, (2) LaPlace approximation for stronger Bayesian approximation than point-estimate models. - Well-written paper, that is clean and easy to read. - Good ablations for how to tune the introduced zero-shot predictor as a validation model. - Decent performance gains over baselines. Weaknesses: - The novelty over RHO-Loss is appreciated, but it should be corrected in the narrative that RHO-Loss does not require access to clean holdout data. This is incorrect as far as I can tell, and Mindermann et al. note this in their paper. - Would be nice to see additional datasets and settings as per the original paper (Language and Clothing1M, however WebVision seems to be a bigger/better dataset) - In the practical acceleration/algorithm section they note that it's not much slower than [31]. I think a more empirical measurement of the difference would be appreciated, further more how much time does this method add over normal training? Does it really accelerate training in terms of time? - It's not clear whether the zero-shot predictor or the laplace approximation is providing the gain? An ablation here seems necessary. If it's just the zero-shot predictor, it's seems like there's not much of a difference here besides slapping a pre-trained model on as a validation model? Technical Quality: 3 good Clarity: 3 good Questions for Authors: As per above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the acknowledgment of the novelty of our method in comparison to RHO-Loss, as well as its good performance. Below, we provide detailed responses to the specific comments, hoping that you find them satisfactory and raise your score accordingly. **Q1: RHO-Loss does not require access to clean holdout data** **A1:** We apologize for the lack of rigor in our previous statements. We have incorrectly understood that the validation model in RHO-Loss relies on *clean* holdout data (especially in the scenarios with label noise). We sincerely appreciate the reviewer for bringing this to our attention, and we will make the necessary revisions to our paper accordingly. The main text of [31] presents that RHO-Loss requires training validation models on a separate set of holdout data. We also note that RHO-Loss can work by splitting the training set into two halves and training a validation model on each half to score samples in another half. However, we argue that training additional validation models can be costly and should be performed repeatedly for new tasks. We assure you that the majority of the arguments presented in our original manuscript remain valid, and will add the above clarifications in the next version. **Q2: Additional datasets and settings as per the original paper ((Language and Clothing1M)** **A2:** As mentioned by the reviewer, the used WebVision dataset is more challenging than Clothing1M, so the current empirical study in this regard is convincing. On the other hand, we acknowledge that tasks such as CoLA and SST-2 are relatively straightforward in the field of NLP. Considering recent advancements, such as the GPT and LLaMA series of models for zero-shot recognition, we have not yet conducted experiments on these tasks. We understand that including empirical studies on these tasks is important, so we plan to include them in our final version due to the time constraints during the rebuttal period. Furthermore, we want to emphasize that our current empirical evaluations strongly support the superiority of our method over RHO-Loss. **Q3: How much time does this method add over normal training? Does it really accelerate training in terms of time?** **A3:** Thanks for the advice. The wall clock time is indeed an important evaluation metric. Here we provide a direct comparison of per-epoch training time between RHO-LOSS and our method on CIFAR-100: |Method | Per-epoch training time | | -------- | -------- | | Uniform | 14s | | RHO-LOSS | 18s | | Proposed | 21s | As shown, our approach has only slightly increased per-epoch time over RHO-Loss. It arises from that we use a CLIP-ResNet50 zero-shot predictor to compute the second term in Eq 16, whereas [31] uses a ResNet18 validation model. In fact, this gap can be reduced by a simple implementation trick—pre-computing the CLIP-ResNet50/ResNet18 predictions for each sample in the training set before training (this is done in [31] but not employed in our reproduction). According to Table 1, compared to RHO-Loss, we can reduce the number of epochs required to reach 40% accuracy from 48 to 32, and for 52.5% from 77 to 53. Combining these results, we see a $48 * 18 / 32 / 21 = 1.29$ or $77 * 18 / 53 / 21=1.25$ times practical acceleration. **Q4: It's not clear whether the zero-shot predictor or the Laplace approximation is providing the gain** **A4:** Thanks for the constructive comment. We clarify that both the zero-shot predictor and the Bayesian treatment contribute to the success of our method. These factors are integrated into Eq 9 and are traded off by the coefficient $\alpha$. Comparing the curves corresponding to $\alpha=0$ and $\alpha=0.2$ in Figure 4a, we witness that using only the zero-shot predictor is suboptimal and would lead to degraded final accuracy (as stated in L275). On the other hand, too large $\alpha$ (i.e., lightening the second term in Eq 9) can lead to a significant drop in training speed and final performance, which emphasizes the importance of the zero-shot predictor. We have also ablated on the zero-shot predictor in Figure 3, which proves the robustness of our method against it. We further try to replace the zero-shot predictor in our method with the validation model in RHO-Loss. The results on CIFAR-100 are listed below: |Method | Epochs to reach 40.0% |Epochs to reach 52.5% |Final acc. | | -------- | -------- |-------- |-------- | | RHO-Loss| 48 | 77| 61| | Proposed - zero-shot predictor + validation model from RHO-Loss | 30 | 52 | 63 | | Proposed | 32 | 53| 63| The above comparisons with RHO-Loss confirm the necessity of introducing the Bayesian treatment. --- Rebuttal Comment 1.1: Title: Response Comment: Q1-Q3. Noted. Q4. So perhaps I'm slightly confused here, but it seems like all of the gain is from subbing in a stronger model (zero-shot predictor), nothing from what is actually proposed in this paper (in fact it loses performance in two cases shown). From RHO-LOSS paper this seems to be an obvious perhaps even implied step? If my understanding is incorrect, then the experiment I asked for remains: RHO-LOSS with CLIP/zero-shot predictor vs RHO-LOSS and proposed model. --- Reply to Comment 1.1.1: Title: Thanks for your reply Comment: We apologize for any confusion caused by our previous reply. We would like to provide further clarification regarding the method **Proposed - zero-shot predictor + validation model from RHO-Loss**. This method specifically refers to the variant of our method that utilizes the validation model of RHO-Loss to **take the place of the zero-shot predictor**. It is important to note that the key distinction between this method and RHO-Loss lies solely in the selection principle. The significant disparities in training speed and final performance serve as evidence that our selection principle is indeed effective. We acknowledge that including a baseline that combines RHO-Loss with a zero-shot predictor could be beneficial in double-checking this point. The corresponding results of RHO-Loss based on our codebase are listed below: |Method | Epochs to reach 40.0% ACC $\downarrow$ |Epochs to reach 52.5% ACC $\downarrow$ |Final ACC. (%) $\uparrow$ | | -------- | -------- |-------- |-------- | | RHO-Loss w/ zero-shot predictor (CLIP-RN50)| 59 | 92 | 58 | | RHO-Loss w/ zero-shot predictor (CLIP-ViT-B/16)| 53 | 86 | 60| | Proposed | 32 | 53| 63| Given these, we would like to ask the reviewer to re-evaluate our contributions. We welcome any further comments.
Summary: This paper studies data selection methods because training examples may be of different importance/quality. By selecting a subset of high-quality/high-usefulness examples, the model’s performance can be improved when training on this subset. In Particular, the paper proposed a method by leveraging a light-weight Bayesian treatment and incorporating off-the-shelf zero-shot predictors. Strengths: 1. The paper is clearly motivated. 2. The proposed method is well explained. 3. The experiments cover different baselines, models and datasets. Weaknesses: The problem of data selection has been studied extensively in the ML community, which verifies the importance of this problem. Some recent works ([1,2] from reference list below) proposed data selection methods that do not only focus on hard/easy examples, but also consider data distribution and its impact on the overall loss (e.g., see Theorem 1 from [2]). It looks like these methods may already have partially addressed the problem studied in this paper. It would be better if the authors could include some discussion about these methods and possibly compare against them in the experiments. Reference: [1] Xia, Xiaobo, et al. "Moderate coreset: A universal method of data selection for real-world data-efficient deep learning." ICLR 2023. [2] Zheng, Haizhong, et al. "Coverage-centric Coreset Selection for High Pruning Rates." ICLR 2023. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see my comments in Weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your supportive reviews and for finding our work clearly motivated and well-explained. We address the detailed concerns below. **Q1: Some discussion and comparison with recent works** **A1:** Thanks for the constructive suggestion. The problem of data selection is indeed important and well-studied in the ML community, but we clarify that our method makes novel, valuable contributions. In particular, the mentioned papers operate in a different setting from our work: they focus on coreset selection while we perform the online batch selection. More specifically, reference [1] contributes a data scoring mechanism that is robust to the change of scenarios for coreset selection, and reference [2] makes an in-depth understanding of the catastrophic accuracy drop issue of one-shot coreset selection and contributes a novel solution to it. Compared to them, our method selects valuable samples based on their marginal influence on the model’s generalization loss during training, so samples with different properties can be selected at various training stages. As a result, we can boost not only the final accuracy but also the training speed. This can be a major difference. Moreover, our selection is dependent on the model in training, while the mentioned two papers are not, so these approaches are expected to handle various scenarios in practice. Anyway, we will add more thorough discussions of these works in the revision and attempt to include direct empirical comparisons.
Summary: This works is situated in the field of **robust generalization**; in particular it studies the problem of how models that were trained on noisy or imbalanced data perform on clean data. They achieve this via **online batch selection**, and in particular they contribute to the development of a **Bayesian framework** for searching for the best datapoints to use in each mini-batch among some canditate points sampled from the training set. In more details, they extend a SotA method, called **RHO-LOSS** [1]. The authors pinpoint some approximation choices that [1] makes in order to make the principled Bayesian approach to data selection practical. As a result, they design an algorithm which, in contrast to [1], they claim 1) to make **less crude approximations about predictive posteriors**, by using variational Laplacian approximations, MC sampling (and other) instead of pure point estimates, and 2) **not to rely on the existence of oracle clean data** during the training of their model. Their experiments demonstrate **improvements in robust generalization and training acceleration for image classification tasks** over other methods in the literature (incl. [1]). [1] Mindermann, Sören, et al. "Prioritized training on points that are learnable, worth learning, and not yet learnt." International Conference on Machine Learning. PMLR, 2022. Strengths: The authors contribute to an important direction for robust training methods, in particular developping a bit further the Bayesian data selection framework. 1. The paper is well-written and motivated. The derivations are easy to follow and the deferrals to the appendix are used appropriately. Presentation can further be improved if there was somewhere a paragraph (perhaps in the appendix) with the list of approximation steps taken (for future reference), e.g. MC sample of a variational posterior, based on Laplace approximation of the last layer of a neural network (for which the backbone is taken as a point estimate), and the Laplace approximation further does not use the true Gaussian, but a Gauss-Newton matrix + potentially a KFAC approximation of it. 2. Authors identify correctly potential improvement points of [1], and proceed in mitigating them. In this way, they introduce a novel approximation scheme to the Bayesian framework of data selection, which is effective and computationally-lightweight. 3. Experiments seem to advocate in favor of using their method over [1] or other methods, they are extensive and on various dataset scales in image classification tasks. Weaknesses: 1. Proposed method overly claims that it does not use oracle clean data, however it might still depend on clean data via the unsupervised pre-trained zero-shot classification proxy that they use. In particular, CLIP-R50 might have been trained in a superset of datasets (like ImageNet) which contains CIFAR10/100. In that case, information from oracle clean data has been stored in the pretrained model. While I believe that this is a weaker assumption, there needs to be an explicit statement of this assumption and a quantitative assessment of this potential pitfall. To which extent would a pretrained model on the same noisy/imbalanced dataset as the benchmark task be useful? Nonetheless, the authors acknowledge this potential limitation of an underperforming pretrained model at the last section. The following (lightweight) experiments can be further performed to augment their arguments or awareness of the limitation: - A zero-shot CLIP baseline is provided, I think they should also provide with a finetuned version using linear probing and training on imbalanced/noisy CIFAR10/100 with uniform sampling. This way it will be more clear that the improvement are more due to their method and not because of an overpowered pretrained model. - Consider CIFAR10/100 experiments using an unsupervised pretrained model, with linear probing or kNN (so that it is zero-shot if you want that) for the oracle model, via for instance some simple SSL method like MoCov2 [2] on the same noisy/imbalanced training set. I will increase the assessed score if such experiments are performed and reported. 2. Ablation study reveals sensitivity to some hyperparameters. As a result, model selection and hyperparameter tuning are important for the success of the method. How was model selection performed? Was the validation split iid to the training set or was it a clean/balanced dataset? This needs to be reported clearly, but not addressed in this paper. 3. It would be nice to have a more extensive ablation study on the effects of the approximation choices in terms of final test accuracy, spanning from crude approximation schemes to the one finally used by the authors. [2] Chen, Xinlei, et al. "Improved baselines with momentum contrastive learning." arXiv preprint arXiv:2003.04297 (2020). Technical Quality: 3 good Clarity: 3 good Questions for Authors: ### Questions * (related to weakness 2) How was model selection performed? * How were target accuracies chosen for training acceleration results? * Lines 151-152: What do we miss by replacing the Hessian matrix in the Laplacian approximation by the Gauss-Newton? * Lines 131-132 about “the recent trend of exploring the potential of pretrained models” (in robust generalization) needs some citations. * What does Table 4 in the Appendix refer to? Please complete the appendix to explain. ### Typos * y-axis on Fig.2,3 and 4 are not in % of test accuracy as indicated. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors derive a novel approximation scheme to Bayesian data selection for training robust models (Strength 2), with favorable results over past literature in image classifiacation tasks (Strength 3) and the writing is overall great (Strength 1). There is a concern about the applicability of the method in cases where a pretrained model does not exist for the task at hand, in which case we need to find another proxy or probably pretrain it (Weakness 1), and how much actually the method makes no use of priviledged clean/balanced datasets for training (Weakness 1) and validation (Weakness 2). The limitation about the pretrained model is mentioned briefly, but it would be appreciated if it is expanded and experimented on with the ablation studies suggested in the Weakness section above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your positive review and recognition of the presentation, novelty, and effectiveness of this work. Below, we provide answers to the specific questions raised. **Q1: To what extent does the effectiveness of this work stem from the effectiveness of the CLIP-based zero-shot predictor that was used?** **A1:** Thanks for the comments and suggestions. We make the following clarification. Firstly, we clarify that the zero-shot predictor used in our work contains valuable information regarding CIFAR while *not* being trained on a superset of CIFAR like ImageNet. Specifically, CLIP has been trained on a web-scale collection of image-text pairs using an *aligning loss*. Secondly, we emphasize that large-scale pre-trained models like CLIP, BLIP, SWAG, and others have become essential infrastructure for modern AI. These models are openly available, freely accessible, and generally applicable. In this regard, our method is more favorable than RHO-Loss [31]. As mentioned in our paper and noticed by the reviewer, our method does not hinge on a performant zero-shot predictor. For example, in Tab. 1, a CLIP zero-shot predictor with 75.6% accuracy on CIFAR-10 leads to a final accuracy of 91% for our method. As shown in Fig. 3, the speedup effect of CLIP-RN50 is similar to that of CLIP-ViT-B/16 on CIFAR-100, while the former is significantly weaker. We comment that we use the zero-shot predictors directly without tuning in all the experiments reported in our paper. We conduct a set of experiments following the reviewer’s suggestion—linear probing using CLIP models on the clean/noisy CIFAR-10/100 with uniform sampling. The table below displays the comparison of the final accuracy (the comparison on convergence speed is not sensible because this baseline uses pre-trained weights while the methods listed in Table 1&2 have not): | Method/dataset | CIFAR-10 | CIFAR-10* | CIFAR-100 | CIFAR-100* | | ------------------------------------ | -------- | --------- | --------- | ---------- | | Linear probing with uniform sampling | 84.5 | 84.1 | 58.5 | 57.8 | | Proposed | 91.4 | 91.3 | 63.3 | 61.4 | We can see that the proposed method outperforms linear probing using CLIP on both clean (CIFAR-10/100) and data with 10% symmetric label noise (CIFAR-10*/100*). We also add a new experiment where we replace the zero-shot predictor in our method with the validation model used in RHO-Loss. The results on CIFAR-100 are listed below: |Method | Epochs to reach 40.0% |Epochs to reach 52.5% |Final acc. | | -------- | -------- |-------- |-------- | | RHO-Loss| 48 | 77| 61| | Proposed - zero-shot predictor + validation model from RHO-Loss | 30 | 52 | 63 | | Proposed | 32 | 53| 63| The above comparison confirms the necessity of introducing the Bayesian treatment and highlights the superiority of our method over RHO-Loss. **Q2: How was model selection performed?** **A2:** We split the original training set into training and validation sets, where the latter remains clean and balanced for model selection. In fact, as shown in Figure 4, the trade-off coefficient $\alpha$ in the selection objective is the primary factor that impacts the training curve and should be carefully selected. In particular, we select it from $\{0.1, 0.2, 0.3, 0.4\}$ (see L207) using a small validation set (of size 500 on CIFAR). We reuse the selected $\alpha$ on WebVision-100 without tuning. We’ll make these points clearer. **Q3: A more extensive ablation study on the effects of the approximation choices in terms of final test accuracy** **A3:** Thanks. We will add a more in-depth empirical analysis of our method in the revision. The core of our method lies in the computation of $\log p(y|x, D^∗, D_{t−1}) - \log p(y|x, D_{t−1})$. In particular, we approximate $\log p(y|x, D^∗, D_{t−1})$ with its lower bound for tractability, and approximate the predictive built on validation data with zero-shot predictors. These two choices are our major technical contributions and differences from RHO-LOSS [31]. The former cannot be trivially ablated and the latter has been well studied by us. For Bayesian treatment, we deploy online Laplace approximation for the posterior update, GGN approximation to avoid ill-posed Hessian, and last-layer KFAC approximation for tractability. These choices are essential. For example, introducing variational inference or MCMC for posterior inference will cause substantially higher costs and implementation challenges. In fact, RHO-LOSS [31] is a counterpart of our approach that uses less principled and less reliable approximations. This is verified by our existing experiments. **Q4: How were target accuracies chosen for training acceleration results?** **A4:** Those in regular and noisy-label experiments follow RHO-LOSS [31]. Those in imbalanced experiments are selected according to the final accuracies achieved by our method and the baselines. **Q5: What do we miss by replacing the Hessian with Gauss-Newton?** **A5:** While the theory advocates using Hessian for Laplace approximation, practical experience suggests substituting it with the Gauss-Newton matrix [8, 37]. Dosing so, the information in the probability curvature mostly remains. Besides, the Gauss-Newton matrix is positive semi-definite and can be more easily manipulated in practice. **Q6: Missed citation in L131** **A6:** Thanks for the suggestion. We’ll fix this issue in the next version. **Q7: What does Table 4 in the Appendix refer to?** **A7:** It reports the results of our method on the *entire* training set of WebVision (see L195: we use only half of the training set for training in the main experiments for a fair comparison with RHO-LOSS [31]). We see more significant speedups and higher final accuracy due to more training data. --- Rebuttal Comment 1.1: Title: Thank you for your responses Comment: **Q1**: Regarding the point of “superset” dataset, the web-scale collection of image-text pairs almost certainly is exposed to images of CIFAR10 categories (airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks). The authors can try to disprove this. The more CLIP, BLIP and SWAG are trained in larger-scale in-the-wild datasets, the more difficult it is to claim zero-shot generalization to ood data. This is not up to the authors to disprove/approve, they just have to provide a convincing enough ablation, that in light of this their method still provides with further benefits. Thank you for the linear probing experiments, I would have been more convinced if you provided a *finetuned version* of the R50-CLIP network instead. The second experiment, however, provides with enough ablation against RHO-Loss. **Q2**: Please be clear about the validation protocol, it is important to clarify (if not address with the same resources as training). **Q7**: Please clarify in the appendix as well Table 4, since there is no text accompanying it. Thank you for answering the rest of my questions, I raise my score to 7. --- Reply to Comment 1.1.1: Title: Thank you for your feedback Comment: We appreciate your detailed comments and suggestions. Regarding the zero-shot predictor, we totally understand what you're trying to convey. As suggested, we provide a further baseline—the finetuned version of the R50-CLIP, with the results detailed below: | | CIFAR-10 | CIFAR-10* | CIFAR-100 | CIFAR-100* | | ------------------------------------ | -------- | --------- | --------- | ---------- | | Fine-tuning R50-CLIP with uniform sampling | 87.6 | 85.3 | 59.1 | 57.6 | | Proposed | 91.4 | 91.3 | 63.3 | 61.4 | We will incorporate your suggestions on validation protocol and Table 4 in the final revision and continually polish our paper. Thank you again!
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Enhancing CLIP with CLIP: Exploring Pseudolabeling for Limited-Label Prompt Tuning
Accept (poster)
Summary: This paper is about the exploration of using pseudolabels for VLMs on different downstream tasks. Specifically, the authors experimented CoOp, VPT and UPT with pseudolabels generated from CLIP on semi-supervised learning, transductive zero-shot learning and unsupervised learning. They proposed three training strategies, namely FPL, IFPL and GRIP, where the difference is mainly at whether the pseudolabels are static or dynamic. Experiments on 6 datasets show significant improvement. Strengths: - Writing is good. The work is presented in a logical way and is easy to follow. - Performance improvement is impressing. - Experiments are thorough. Weaknesses: - Novelty is limited. The use of CLIP's pseudolabels are not new and the proposed training strategies are also widely used in self-training methods. - I agree with the authors' statement that "If CLIP performs poorly on a task, we may struggle to obtain a reliable set of pseudolabels to begin with, potentially diminishing CLIP’s performance." Therefore, it is unintuitive why GRIP achieves such a big improvement even on datasets like MNIST where CLIP fails. One explanation I can think of is that the quality of the pseudolabels are drastically increasing as the training proceeds. However, results from Fig 3 seems to show the opposite. I am mainly concerned about this and believe further analysis is needed to explain what is the exact reason for the success of the proposed method. - I'm also concerned with the computational cost since a total of 10 iterations are used. Authors should include this when comparing with other methods. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - The authors stated that the proposed IFPL and GRIP are "novel and proposed here for the first time for prompt tuning". However, aren't these two methods similar in spirit to self-training? What are the differences? - According to Fig 3, the accuracy of pseudolabels are only marginally increasing for IFPL, and decreasing for GRIP as the training proceeds. Is it necessary then to train over 10 iterations? What are the detailed improvements for each iteration? - Why is it that CLIP's pseudolabel accuracy decreases in Fig 3? Shouldn't zero-shot CLIP remain an approximately constant performance irrespective of the number of iteration and the number of unlabeled data? - When using a larger image encoder, why does the performance decrease? If a larger backbone is used, wouldn't the quality of pseudolabels be higher? This further deepens my concerns on the proposed method. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes, the authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback. Below we address all your questions. > Novelty is limited. The use of CLIP's pseudolabels are not new and the proposed training strategies are also widely used in self-training methods. We respectfully disagree that novelty is limited. Our study centers on conducting a comprehensive exploration of a wide design space that includes prompt modalities, learning paradigms, and training strategies. As shown in Figure 1, this design space has been an underexplored topic in the literature. If there are other papers that have considered these choices in the context of vision-language models like CLIP, we would be grateful for pointers. As also discussed above in the general comment, this exploration leads to novel and useful findings about prompt tuning across a wide range of settings like UL and TRZSL including improved performance and the Robin Hood effect. > I agree with the authors' statement that "If CLIP performs poorly on a task, we may struggle to obtain a reliable set of pseudolabels to begin with, potentially diminishing CLIP’s performance." Therefore, it is unintuitive [...] I am mainly concerned about this and believe further analysis is needed to explain what is the exact reason for the success of the proposed method. The concerns raised by the reviewer about MNIST can be clarified in Figure 6 of the Appendix. As the plots display, for SSL and UL the accuracy of GRIP pseudolabels is pretty high and maintained throughout the iterations. Moreover, its value is close to the accuracy of MNIST reported in Table 1. As for TRZSL, the accuracy is close to the one of CLIP and in absolute value reflects the overall accuracy reported in Table 1. We explain the fact that GRIP accuracy is higher than CLIP, even with comparable pseudolabels accuracy, with the trade-off between quality and quantity that we extensively cover in Section 4.1, page 7, and the Appendix. > I'm also concerned with the computational cost since a total of 10 iterations are used. Authors should include this when comparing with other methods. And > According to Fig 3, the accuracy of pseudolabels are only marginally increasing for IFPL, and decreasing for GRIP as the training proceeds. Is it necessary then to train over 10 iterations? What are the detailed improvements for each iteration? Repeating the training process multiple times brings impressive improvements at the cost of a non-negligible increase of computation time. While we parallelized the pseudolabeling procedure to cut some of the cost, reducing those for iterative training presents more significant challenges. We decided to mainly focus on the analysis of qualitative and quantitative effects of pseudolabels in prompt tuning, as overviewed in Figure 1. Future research should address budget constraints and, as suggested by the reviewer, investigate optimal stopping criteria for the iterative process, considering the possibility of reaching a plateau or decreased pseudolabel quality after a certain point, to maximize efficiency while maintaining performance. Due to the importance of the topic, we will add a discussion of computational time and cost in the limitations section of the paper. > The authors stated that the proposed IFPL and GRIP are "novel and proposed here for the first time for prompt tuning". However, aren't these two methods similar in spirit to self-training? What are the differences? IFPL and GRIP are novel in the context of prompt tuning and CLIP-based pseudolabels since they had not been previously investigated. However, it is fair to say that they are similar in spirit to the common self-training approach. The key distinction is the pseudolabel assignment method: using a top-K strategy instead of a confidence threshold. Despite the subtlety of this difference, it significantly impacts the final prediction’s quality. Our experiments demonstrate that prompt tuning with IFPL and GRIP achieves a more equitable distribution of class accuracies. If considered misleading, we can replace “novel” with “unexplored”. > Why is it that CLIP's pseudolabel accuracy decreases in Fig 3? Shouldn't zero-shot CLIP remain an approximately constant performance irrespective of the number of iteration and the number of unlabeled data? When we increase the amount of pseudolabeled data, the accuracy of CLIP is not necessarily supposed to remain constant. As we increase K, we are effectively selecting pseudolabels with lower similarities to the classes, resulting in a reduction in their accuracy, as shown in the plot. This observation aligns with previous findings in [15]. > When using a larger image encoder, why does the performance decrease? If a larger backbone is used, wouldn't the quality of pseudolabels be higher? This further deepens my concerns on the proposed method. We would like to clarify that when using a larger encoder, GRIP still substantially improves the performance across all learning paradigms. The smaller relative improvements with respect to smaller visual encoders align with our expectations. Larger encoders possess a stronger base knowledge, making it relatively more challenging to attain further improvements on top of it. In Table 10 (additional PDF for rebuttal, which we will add to an appendix), we report the accuracy of CLIP with different backbones. To clarify, the performance of the larger backbone is higher, indicating higher quality pseudolabels as the reviewer says. Table 3 is included to show that techniques like GRIP generalize across backbones, but it is unsurprising that the gains are a bit smaller with a larger backbone because it is a stronger starting point. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Dear authors, Thank you for the feedback! Most of my concerns were addressed (e.g., performance on MNIST and with a stronger backbone). I've also carefully read other reviewers' comments and the authors' responses. However, since pseudolabeling of CLIP is already widely explored in the unsupervised setting, I still feel that extending such methodology to SSL and TZSL is a marginal step. The experiments are good, but I would prefer seeing something new in the design. Therefore I would remain my score. --- Reply to Comment 1.1.1: Comment: Thank you for your response, we are glad your main concerns have been solved. For the sake of the final discussion among reviewers, we provide some clarifications about the goals of our work. > “since pseudolabeling of CLIP is already widely explored in the unsupervised setting” In our design space, for unsupervised learning we define 6 paths to explore, among which only 1 was previously investigated in the literature [15]. If there are other papers that have considered these choices in the context of vision-language models like CLIP, we would be grateful for pointers. > “I still feel that extending such methodology to SSL and TZSL is a marginal step” We respectfully disagree with the reviewer. We do not claim to propose or extend methodologies to work in the SSL and TRZSL settings, rather we extensively explore how pseudolabeling adapts to these settings in the context of prompt tuning with CLIP. The exploration of our underexplored design space is based on the observations that (1) VLMs' zero-shot capabilities extend the usability of pseudolabels to any limited-label data scenario, and (2) paradigms such as semi-supervised, transductive zero-shot, and unsupervised learning can all be seen as optimizing the same loss function, by using zero-shot pseudolabels as a source of supervision. We consider these two observations, and the experimental findings that generalize across these settings, as important takeaways from this paper. > “The experiments are good, but I would prefer seeing something new in the design” The goal of this paper is to study how to use pseudolabels with CLIP in a variety of unexplored learning settings. We do not propose new methodologies to use pseudolabels with CLIP. We explored 27 combinations of prompt modalities, learning settings, and training strategies, among which only 1 was already explored. We consider filling this gap in literature a significant contribution that shows useful takeaways to the community. In this context, our experiments not only demonstrate the effectiveness of using pseudolabels iteratively for prompt tuning CLIP in limited labeled scenarios, but also show that prompt tuning with pseudolabels can mitigate the biases of the original model. Reviewers generally appreciated our contributions: * Reviewer sr7c says: “ The authors provide compelling evidence that underlines the power of a repetitive prompt-training approach, which leverages CLIP-based pseudo labels. Regardless of the learning model (SSL, TZSL, UL) or the type of prompt (text, visual), this strategy significantly enhances the image classification capabilities of CLIP across multiple settings. [..] effectively addressing the inherent bias of CLIP pseudo labels.” * Reviewer kGE6 says: “ agreeing with the position of Reviewer 1u3c, that it is a necessary milestone in the research community for under-explored CLIP pseudo-labeling which can stimulate other works and deeper understanding of pseudo-labeling algorithm aspects.” * Reviewer qyTk: “ Performance improvement is impressing. Experiments are thorough”
Summary: The authors provide compelling evidence that underlines the power of a repetitive prompt-training approach, which leverages CLIP-based pseudo labels. Regardless of the learning model (SSL, TZSL, UL) or the type of prompt (text, visual), this strategy significantly enhances the image classification capabilities of CLIP across multiple settings. Moreover, by using the Top-K pseudo-labeling approach, they ensure a balanced distribution of pseudo-labeled training samples for each class, thereby effectively addressing the inherent bias of CLIP pseudo labels. Strengths: - Various learning paradigms, including semi-supervised, transductive zero-shot, and unsupervised learning, can all be viewed as unique instances of a single objective function when pseudolabels are used as a form of guidance. - As evidenced by Table 1and Table 2, the experimental results demonstrate a significant boost in performance, indicating that the proposed pseudo-labeling strategy holds considerable merit. Weaknesses: The overall structure or layout of the manuscript needs to be further refined. - Table 5 ("There is a trade-off between quality and quantity of pseudolabels") is mentioned in L290, but Table 5 does not exist in the main manuscript. - Table 2 was not mentioned at all in the body text. - Since Fig 3 and Fig 5 are too small to be seen properly, efforts are needed to improve readability. - The formulas of L157-L158 should be specified as an equation in the form of (1), but it was not. Methodologically, the proposed approach is too naive. - Setting the trade-off parameters in unified objective function (L157-L158), e.g., gamma, and lambda, is too heuristic, and there is lack of the analysis. - Selecting the Top-K most confident samples by class is too naive. It would be nice to select a confident sample by improving class diversity through the application of techniques such as information maximization loss [1]. And it would be good to present comparison results with other attempts for class diversity. [1] Liang, Jian, Dapeng Hu, and Jiashi Feng. "Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation." ICML 2020 Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - In Table 1 and Table 2, there's a noticeable drop in some experimental results under the UL setting. I'm curious as to the cause of this decrease. - Within Table 2, the outcomes for the TRZSL learning setting in RESICS45 and DTD, under the "Multimodal prompts" category, show a decline. I'm interested in understanding why this is so. - In the experimental results presented in Table 3, where "GRIP benefits adaptation even for larger image encoders," there seems to be a reduced rate of improvement when using a larger image encoder. I wonder the reason or analysis behind this observation. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: I already pointed out in the weakness part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback. Below we address all your questions. About the layout and typos we will address all the points in the final version of the paper. > Setting the trade-off parameters in unified objective function (L157-L158), e.g., gamma, and lambda, is too heuristic, and there is lack of the analysis. Indeed the trade-off parameters we defined are heuristic, but they have proven to be effective across all 162 combinations of tasks, prompt modalities, learning paradigms, and training strategies that we extensively investigated. Although having a theoretical analysis is always desirable, it is worth emphasizing that empirically setting hyperparameters to balance the terms of a loss function is the common practice in machine learning. Often, hyperparameter optimization can be resource-intensive, whereas in our approach, equally weighting labeled and pseudolabeled data yields remarkable improvements and obviates the need for an exhaustive hyperparameter search. > Selecting the Top-K most confident samples by class is too naive. It would be nice to [..] other attempts for class diversity. We respectfully disagree in considering simplicity a weakness, especially after observing the quantitative (impressive performance improvement) and qualitative advantages (Robin Hood effect) of using the top-K approach for pseudolabeling. Since we set our goal to explore how to use CLIP’s pseudolabels in a variety of low-resource settings for prompt tuning, we deliberately decided to focus on the design space overviewed in Figure 1. Using more complex procedures for other aspects of the experiments risks confounding the results. > In Table 1 and Table 2, there's a noticeable drop in some experimental results under the UL setting. I'm curious as to the cause of this decrease. The drops in performance under the unsupervised learning (UL) setting typically correspond to a poor initial assignment of pseudolabels, perpetuated through the iterative process. Indeed, in UL pseudolabeled data is the only source of supervision. However, this phenomenon can also be attributed to the prompt modality, as we can observe improvements under the same setting where a prompt of different modality is trained. > Within Table 2, the outcomes for the TRZSL learning setting in RESICS45 and DTD, under the "Multimodal prompts" category, show a decline. I'm interested in understanding why this is so. We speculate that during the first training iteration, the learned prompts produce low-quality pseudolabels, leading to suboptimal learning in subsequent iterations. In contrast, GRIP - which increases the set of pseudolabels over time - significantly outperforms the baseline in these cases. Understanding why this happens for multimodal prompts but not for textual or vision prompts is challenging. In the literature, the behavior of different prompt modalities is typically associated with the task’s characteristics [44]. However, the prompt tuning literature lacks a definitive scientific consensus on the effectiveness of one modality over another. > In the experimental results presented in Table 3, where "GRIP benefits adaptation even for larger image encoders," there seems to be a reduced rate of improvement when using a larger image encoder. I wonder the reason or analysis behind this observation. The smaller relative improvements with respect to smaller visual encoders align with our expectations. Larger encoders possess a stronger base knowledge, making it relatively more challenging to attain further improvements on top of it. In Table 10 (additional PDF for rebuttal, which we will add to an appendix), we report the accuracy of CLIP with different backbones. To clarify, the performance of the larger backbone is higher, indicating higher quality pseudolabels. Table 3 is included to show that techniques like GRIP generalize across backbones, but it is unsurprising that the gains are a bit smaller with a larger backbone because it is a stronger starting point. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thanks for addressing my feedback; I'm inclined to increase my score from 4 to 6 in favor of the paper's acceptance. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback and the support! We will include all the necessary clarification in the final version of the paper!
Summary: In this paper authors extend empirical study of self-training for the case when pseudo-labels are generated by models (CLIP) in zero-shot regime (models are trained on unlabeled data with respect to a downstream task, but can be used for zero-shot prediction for the downstream task). Authors investigate self-training in case of visual-language models (CLIP) with tuning visual/text/visual-text prompts with pseudo-labels generated in the zero-shot regime by CLIP model. They formulate also different training regimes (unsupervised, semi-supervised, transductive zero-shot) in terms of simpler supervised training where part of data (or all) just has pseudo-labels. Also it is investigated how we could bootstrap further performance in self-training doing rounds of training while updating the pseudo-labels. In the end authors show that self-training is effective and improve across different settings and datasets. Interesting results are obtained in terms of debiasing original CLIP model when self-training is done afterwards. Strengths: - extensive empirical study of self-training for the CLIP-based models and various configurations and settings - demonstrating that CLIP-style models can be used as seed model for self-training in variety of settings bringing significant boost for prompt finetuning - empirical demonstration how CLIP de-biasing can be done via self-training Weaknesses: - absence of any ablations on balancing between labeled / unlabeled data - absence of ablation on the number of samples selected for every class in self-training (why K=16? is it important? why balanced across classes?), also it is not clear what is the effect of balanced selection here especially on the de-biasing the CLIP (if de-biasing happens only because of this balancing then it is very obvious/expected result in my opinion) - GRIP exact explanation / how it works should be extended in the text (many ambiguities in the current description) - Table 1 and other Tables: I don't get why for CLIP TRZSL is similar to SSL and UL while for self-training in any combination TRZSL becomes way better than SSL and UL? - Missed discussion on the results that text prompt alone gives superior results compared to text+visual prompt or visual-alone prompt **Summary** There is no novelty or any impactful design of the self-training algorithm itself, though extensive empirical study is done for zero-shot learners (CLIP) used to pseudo-label data. The particular choice of hyper-parameters (like balancing pseudo-labels per class) demonstrate significant improvement for the prompt fine-tuning task as well as de-biasing original model (latter I found particularly important and interesting). Technical Quality: 3 good Clarity: 3 good Questions for Authors: - lines 50-53: I would not agree that VLM are trained on unlabeled data. We do mining of audio-visual pairs, so in that sense it is really unlabeled. On the other hand, yes, downstream task is different and we can use zero-shot inference to generate pseudo-label. However one could consider that task of pairing text and audio is more general than pairing class name and image. I would be here more concrete in phrasing. - There are a lot of repetitions of the same sentences / phrases along the text, e.g. lines 60-69 (other - lines 134-144) appear couple of times. I would reorganize the text to refer to the previous parts rather than repeat almost same ways in introduction / results / sections. - typo line 96: "such as such as" - lines 179 and 187 - selection of $gamma$ and $\alpha$ seems to be equivalent providing 1:1 balancing between labeled / unlabeled data. I don't get why then expression are made different if in the end mathematically optimization is the same. - IFPL: what happens if for some class we don't have K samples? say no any examples are predicted as class 0 for MNIST, what happens then? - How about IFPL variant but when we increase K over time, still taking top only pseudo-labels? This is I assume is different from the GRIP. - It is not clear if in GRIP classes for pseudo-labels are still balanced, and also how do we select which part of unlabeled set to take? Do we take every iteration top pseudo-labels increasing the number of them or we just increase randomly the data size and take whatever pseudo-labels will be there? Do we still re-initialize prompt every iteration here? Do we reuse or regenerate pseudo-labels for the part of data used in previous iteration here? - Why we don't train continuously prompt when we re-generate pseudo-labels? Why do we not fine-tune the whole model also but only prompting (if for efficiency purpose - ok, but interesting to see if we need to finetune the whole model)? - lines 308-311 - this is actually known fact. E.g. for fixmatch/remixmatch we do different augmentations, and they are the key to make self-training to work, so particular type of noise in data/labels make self-training to work. - Figure 3 (and similar in Appendix) is not clear. I got what is x-axis only after reading it several times. Either better notation or better caption is needed. Maybe "refers to the top x-axis (number of iterations) while .. to the bottom a-axis (amount of unlabeled data)". - I still don't get why GRIP over iterations can become worse than IFPL. Also maybe it is worth to have ablation where we do IFPL but for every iteration K is increased so that we increase amount of unlabeled data involved though we do this over the most confident every time still. - Robin Hood effect: is it because we do balancing of classes in self-training? I guess here this plays a huge role, assuming that some classes are unrepresented in CLIP pretraining + we know that in self-training different classes have different pace of learning, so that balancing can resolve issues on under representative classes or hard ones. - line 345 - what is "conventional pseudo-labeling"? - Table 5: typo - UPL -> UPT - lines 664-665 - is it because of the CLIP bias itself and the way classes are balanced in fine-tuning? - lines 692-693 - maybe not surprising as we balanced examples per class? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are discussed after conclusion sections. Formulation sounds reasonable to me. **Update: after rebuttal and discussion with authors the score is update from 6-weak accept to 7-accept and contribution from 2 to 3.** Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback. Below we address all your questions that space allows. We do not seem to be able to reply to our own rebuttal to answer more. Please reply if you would like us to answer the remaining questions in our own reply. > Absence of any ablations on balancing between labeled / unlabeled data We started experimenting without balancing labeled and unlabeled data. This yielded unsatisfactory results since the training excessively focused on either of the two sets of data. So, we tried balancing the two. This strategy turned out to robustly work for all the 162 combinations of datasets, prompt modalities, learning settings, and training strategies we explored. We observed impressive quantitative and qualitative performance improvements, while eliminating the need for a hyperparameter search to optimize results. We decided not to confound the conclusions with balance optimization due to our focus on the design space overviewed in Figure 1. > Absence of ablation on the number of samples selected for every class in self-training (why K=16? is it important? why balanced across classes?) We set K=16 since it is indicated as the optimal K in the previous research on pseudolabeling with CLIP [15]. In general, K is an hyperparameter that that may require optimization in practical cases. We decided to be consistent with the literature and applied this fixed value of K in order to focus on the design space overviewed in Figure 1. The balance across classes also derives from [15]. About that, we note that this is an easy and effective way to avoid pseudolabels distribution skewed toward certain classes. > Table 1 and other Tables: I don't get why for CLIP TRZSL is similar to SSL and UL while for self-training in any combination TRZSL becomes way better than SSL and UL? CLIP's performance similarity across TRZSL, SSL, and UL is due to evaluation on the same test set. While SSL and UL get the same scores, TRZSL accuracy differs since it is the harmonic mean between seen and unseen class accuracies. This helps recalibrate the overall score, particularly when the model performs poorly on one set of classes. Observing significantly larger scores for TRZSL compared to SSL and UL is expected. This difference arises because in TRZSL, a portion of the target classes is provided with labeled data, which contributes to a larger ground truth being available to the model compared to SSL and UL settings. > Missed discussion on the results that text prompt alone gives superior results compared to text+visual/visual prompt Using pseudolabels dynamically is beneficial for each modality. However, determining the clear superiority of one prompt modality over the other is challenging, as it depends on the specific tasks. For example, visual prompts work better for EuroSAT, while textual prompts excel in Flowers102. Despite intuitive explanations (Section 3.1), the scientific consensus remains elusive [44]. Hence, we prefer to emphasize that the dynamic use of pseudolabels consistently improves performance for each prompt modality, without declaring one modality as definitively better than the other. > The selection of lambda and gamma seems to be equivalent providing 1:1 balancing between labeled / unlabeled data. I don't get why then expression are made different if in the end mathematically optimization is the same. Yes, they are equivalent. We will simplify the presentation and just leave the one weighting the unlabeled data. > IFPL: what happens if for some class we don't have K samples? say no any examples are predicted as class 0 for MNIST, what happens then? The top-K pseudolabeling strategy we adopt, originally studied in [15], consists of (1) computing the similarity scores of each datapoints with classes’ textual prompts, and (2) select for each class the K datapoints with the highest similarity score to the class. In this way, we always get K pseudolabels per class. Of course, it can happen that if K is too large and the unlabeled dataset is smaller than K*|number of classes|. In this case, we suggest reducing K. > How about IFPL variant but when we increase K over time, still taking top only pseudo-labels? This is I assume is different from the GRIP. Based on our understanding, it appears that the variant of IFPL proposed by the reviewer is the same as GRIP. In IFPL, top-K pseudolabels are recomputed at each iteration while keeping K fixed. In contrast, in GRIP, K increases with each iteration and we assign pseudolabels still following the top-K schema. > It is not clear if in GRIP classes for pseudo-labels are still balanced, and also how do we select which part of unlabeled set to take? [...] Do we reuse or regenerate pseudo-labels for the part of data used in previous iteration here? GRIP maintains class balance by selecting the top-K samples at each iteration, with K increasing progressively. Similar to IFPL, both prompts and pseudolabels are reinitialized with every iteration, in order to avoid accumulating errors from earlier iterations. In other words, learning progresses from pseudolabels to new prompts to new pseudolabels, and so on. > Why we don't train continuously prompt when we re-generate pseudo-labels? Why do we not fine-tune the whole model [...]? We reinitialized prompts at each iteration to avoid error accumulation from previous steps. The focus of this work is on parameter-efficient adaptation of CLIP, thus we did not try out the use of pseudolabels to fine-tune the entire model. Additionally, it is worth noting that the limited amount of initial training data might not be sufficient to meaningfully fine-tune such a large model. We will clarify this point in the revised version of the paper. > I still don't get why GRIP over iterations can become worse than IFPL. [...]. As reported in Table 2, GRIP either outperforms or performs comparatively with IFPL. The suggestion of the reviewer is correct and it corresponds to GRIP. --- Rebuttal Comment 1.1: Title: Reviewer's response to rebuttal Comment: Dear authors, Thanks a lot for detailed clarifications. I have carefully read all reviews and your responses. I strongly suggest you to incorporate all main points of the discussion into the final revision (e.g. the choice of balancing and top-k of 16). I still have some questions and comments to clarify: > Absence of any ablations on balancing between labeled / unlabeled data Ok, I buy your arguments here. But I think you should include this into text to provide for the future as reference of observations. > Absence of ablation on the number of samples selected for every class in self-training (why K=16? is it important? why balanced across classes?) As you have reference to prior work - nice to see it still works confirming robustness of prior work choice. Add this to the text as it reasons on the choice you made. > Table 1 and other Tables: I don't get why for CLIP TRZSL is similar to SSL and UL while for self-training in any combination TRZSL becomes way better than SSL and UL? I think this clarification should be included in the final revision to make readability clear. > (2) select for each class the K datapoints with the highest similarity score to the class. Could then we end up having some samples presented in data with two different assigned pseudo-labeled classes? > Based on our understanding, it appears that the variant of IFPL proposed by the reviewer is the same as GRIP. In IFPL, top-K pseudolabels are recomputed at each iteration while keeping K fixed. In contrast, in GRIP, K increases with each iteration and we assign pseudolabels still following the top-K schema. > GRIP maintains class balance by selecting the top-K samples at each iteration, with K increasing progressively. Similar to IFPL, both prompts and pseudolabels are reinitialized with every iteration, in order to avoid accumulating errors from earlier iterations. In other words, learning progresses from pseudolabels to new prompts to new pseudolabels, and so on. From description in line 223 I do not agree with your statement. "we use i/I -th of the unlabeled data" - there is nothing about top-K where K is growing. You need to be more precise in the description of GRIP in the text for the revision. But thanks for the explanation, now it makes sense to me :) (And general response answers entirely my concern here). Any comment on > Robin Hood effect: is it because we do balancing of classes in self-training? I guess here this plays a huge role, assuming that some classes are unrepresented in CLIP pretraining + we know that in self-training different classes have different pace of learning, so that balancing can resolve issues on under representative classes or hard ones. ? > As reported in Table 2, GRIP either outperforms or performs comparatively with IFPL. The suggestion of the reviewer is correct and it corresponds to GRIP. Here I look e.g. at Fig 6 in Appendix where accuracy drops over iterations for GRIP but not for IFPL. I found this opposite to the expected behaviour where GRIP should improve things over iterations as we become more confident (maybe this shows overfiting). Do you report last iteration performance for GRIP or the best in Table 2? Thanks, Reviewer kGE6. --- Reply to Comment 1.1.1: Comment: Thank you again for all the suggestions. They are all valuable and clarifications will certainly be included in the final version of the paper. Below, we answer further questions and post the answers we wrote and left out because of space constraints. > (2) select for each class the K datapoints with the highest similarity score to the class. [..] Could then we end up having some samples presented in data with two different assigned pseudo-labeled classes? Yes, this can happen. However, we checked the pseudolabels at each iteration and it was rarely the case. This is a characteristic of the pseudolabeling strategy proposed in [15]. We believe this can be an object of study for future work motivated by the effectiveness of self-training. > Robin Hood effect: is it because we do balancing of classes in self-training? I guess here this plays a huge role, assuming that some classes are unrepresented in CLIP pretraining + we know that in self-training different classes have different pace of learning, so that balancing can resolve issues on under representative classes or hard ones. In our paper, we conducted a comprehensive investigation into the causes of the Robin Hood effect, which results in a more balanced distribution of class accuracies (Section 4.2). Our study revealed that the combination of the top-K pseudolabeling strategy and prompt tuning plays a crucial role in achieving this effect. We hypothesize that the parameter-efficient nature of prompt tuning also helps avoid overfitting to the easier classes. Surprisingly, this straightforward and intuitive approach, aimed at balancing classes, has not been explored in previous works that address imbalanced accuracy distributions, as evidenced in references [49,8]. While these studies emphasized the importance of examining class accuracies when employing pseudolabels in semi-supervised learning, we extend this analysis to the application of prompt tuning and CLIP-pseudolabeling. By doing so, we present a novel perspective on this phenomenon and its practical implications. > Here I look e.g. at Fig 6 in Appendix where accuracy drops over iterations for GRIP but not for IFPL. I found this opposite to the expected behaviour where GRIP should improve things over iterations as we become more confident (maybe this shows overfiting). Do you report last iteration performance for GRIP or the best in Table 2? We report the performance of the last iteration. The behavior of GRIP’s pseudolabels accuracy surprised us too. That is why we emphasize it by discussing the trade-off between the quantity and quality of pseudolabels (Sect 4.1, page 7). Although the accuracy of GRIP decreases we can still observe that it is higher than the accuracy of CLIP on the same amount of data. Moreover, the accuracy of pseudolabels at the last iteration is close to the overall accuracy of GRIP in Table 1 and 2. We speculate that one of the causes of the deterioration of psuedolabels accuracy could be overfitting, but we do not have evidence to entirely support this claim and leave the question open for further explorations. > lines 308-311 - this is actually known fact. E.g. for fixmatch/remixmatch we do different augmentations, and they are the key to make self-training to work, so particular type of noise in data/labels make self-training to work. In lines 308-311: we say “This suggests that numerous, slightly noisier pseudolabels can yield better results, highlighting a trade-off and offering insights for future approaches.” We clarify that the word choice of “noisier” is misleading and led the reviewer to think we are referring to augmentations. Instead, with “noisier” we refer to incorrectly assigned pseudolabels. Specifically, in that sentence we explain the trade-off between quality and quantity of pseudolabels. To avoid confusion we will replace the word “noisier” with “incorrect”. > lines 692-693 - maybe not surprising as we balanced examples per class? The class balance is imposed on both methods we compare. However, the class accuracy distribution of CLIP via linear probing shows a 30 times larger reduction in the accuracy of rich classes. Lines 692-693 comment on the class accuracy distribution obtained running GRIP with linear probing. While the class balance plays a central role to determine the Robin Hood effect, in this context we were commenting that with linear probing the reduction of accuracy of rich classes is on average 30 times larger than the reduction observed using prompts. > lines 664-665 - is it because of the CLIP bias itself and the way classes are balanced in fine-tuning? In the lines highlighted, we observe that the accuracy of GRIP on seen classes is worse than CoOp. During training, for large lambda, the loss component of unlabeled data (unseen classes) is the first to decrease, while the loss on the seen classes reduces later. Thus, we hyphotize that extra training steps might be needed to complete the learning on the labeled data.
Summary: The paper explores the use of CLIP for pseudo-labeling for various tasks such as SSL and on various datasets. Overall - while not being very surprising - the results can clearly outperform prior work (that is not using CLIP it has to be said) - and thus showing the potential of CLIP for such tasks. Strengths: Clearly, the authors show what they set out to do: CLIP pseudo-labeling can improve via pseudo-labeling on a variety of tasks and datasets. That is indeed nice to see. The positive experiments are good - doing this for a variety of tasks and on a variety of datasets. Weaknesses: While the improvements are good - they are not surprising either. CLIP is trained on a wide variety of data and thus it is clear that pseudo-labeling using CLIP should help in many cases In my view the main weakness of the paper is that it is not showing the limits of CLIP pseudo-labeling - while most of us do not know which exact data is used to train CLIP - the datasets used seem still reasonably close. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Overall the paper has a clear point and it does a good job making that point reasonably clear. As said above, after reading the paper I am not surprised that this can work, but I am left without an answer what the limits are of CLIP pseudo-labeling, Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: limitation discussion ok for me - except the point of the limits of CLIP pseudolabeliing as mentioned above Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback. Below we address all your questions. > The results can clearly outperform prior work (that is not using CLIP it has to be said) - and thus showing the potential of CLIP for such tasks. We would like to clarify that the baselines and comparisons described in LL237-242 are using CLIP. The wide range in performances show the benefits of pseudolabeling and the importance of the choices within the design space we explore. > While the improvements are good - they are not surprising either. CLIP is trained on a wide variety of data and thus it is clear that pseudo-labeling using CLIP should help in many cases Because of the remarkable zero-shot ability of CLIP, we agree about the possibility of getting sufficiently good pseudolabels. However, we respectfully disagree about the clear benefit we can derive from them. Indeed, in our paper we show that the naive and static usage of top-K pseudolabels per-class, as proposed in [15], does not fully unlock their potential. On the contrary, dynamic training strategies show good improvements. > In my view the main weakness of the paper is that it is not showing the limits of CLIP pseudo-labeling while most of us do not know which exact data is used to train CLIP - the datasets used seem still reasonably close. We agree that exploring the limitations of a method is also important. In this work, we were surprised to find that pseudolabeling can be effective for improving CLIP on such a wide range of tasks. We ran our experiments on domain-specific tasks where CLIP exhibited poor performance, primarily attributed to domain shift issues, as highlighted by CLIP's authors [31, Section 3.1.5, and Figure 5]. It was surprising to see that pseudolabels can bring improvements even in cases where CLIP has low baseline performance, e.g., EuroSAT, DTD, and FGVCAircraft. We know that pseudolabeling will fail if CLIP’s initial performance is not much better than random guessing or adversarially biased. We have discussed this in the paper’s limitations section. But given the range of datasets and design choices already considered, we had to leave searching for additional datasets on which it fails for future work. Perhaps datasets with very different sensors, like medical imaging, would be even more challenging. --- Rebuttal Comment 1.1: Comment: After reading all reviews and the authors' rebuttal I personally still lean towards acceptance of the paper. While I agree that one could argue that methodologically there is not much novelty, I still consider it worth reporting given the reported improvements and the fact tat CLIP pseudolabeling is still rather under-explored. I also fully agree with on of the other reviewers that the authors should included the discussions and clarifications mentioned in the reviews and rebuttal to strengthen the paper overall. --- Reply to Comment 1.1.1: Comment: We are glad the reviewer appreciated our work and emphasized the paper's contribution as extending beyond methodological novelty. Thank you again for your review. We will make sure to add valuable discussions and clarifications in the final version of the manuscript.
Rebuttal 1: Rebuttal: We thank all the reviewers for their time and valuable feedback. Addressing your concerns during the discussion phase will significantly enhance the paper. We clarify common questions here and address your reviews individually below. ### Novelty Review qyTk expressed concern about the novelty of our training strategies using CLIP-based pseudolabels for prompt tuning. However, our paper's core novelty lies in the extensive exploration of an underexplored design space for prompt tuning with CLIP, which has been generally appreciated by all reviewers. The design space includes prompt modalities, learning paradigms, and training strategies (Figure 1), that define 27 possible paths to explore, among which only 1 was previously investigated in literature [15]. We applied the same versatile training strategies to all the settings, based on the observations that (1) VLMs' zero-shot capabilities extend the usability of pseudolabels to any limited-label data scenario, and (2) paradigms such as semi-supervised, transductive zero-shot, and unsupervised learning can all be seen as optimizing the same loss function, by using zero-shot pseudolabels as a source of supervision. While pseudolabeling with CLIP and pseudolabeling in general are not new, our perspective brings novel and meaningful contributions, particularly in the applicability of pseudolabels and their use to tailor CLIP for specialized domains with limited labeled resources. ### Naivety While in our analysis we vary the prompt modalities, learning paradigms, and training strategies, we do not vary the rule for assigning pseudolabels and stick to the top-K strategy [15]. Reviewer sr7c criticized the usage of a too-naive rule. Although more sophisticated pseudolabeling and training strategies have been devised, we deliberately decided to stick to simple methods and have a better focus on the the three dimensions of the design space overviewed in Figure 1. It is impressive to see how this simple strategy leads to quantitative (impressive performance improvement) and qualitative advantages (Robin Hood effect), recognized by all reviewers. ### Clarification on training strategies Reviewers Qe43 and kGE6 suggested to improve the clarity of the descriptions of iterative strategies (IFPL and GRIP). In the manuscript, we will do that by adding explicit step-by-step explanations. For IFPL, we begin by obtaining the top-K pseudolabels for each target class. These pseudolabels are then used to train a new task-specific prompt. After completing the training, we use the acquired prompt to compute the top-K pseudolabels per class again. Subsequently, we reinitialize the prompt and repeat this entire process for a total of I iterations. As for the GRIP method, it shares similarities with IFPL, but with a key difference. In each iteration, we progressively increase the value of K. Specifically, during the ith iteration, we use K= i/I-th of the unlabeled data to perform the steps in the iterative process. ### Typos We did our best to proofread the paper before the submission. However, we missed some minor typos which were highlighted by the reviewers, e.g., Table 5 in L290 should be Table 2. We resolved them. Thank you for pointing them out. Pdf: /pdf/0ddd3c12b5289944b03881b426dd9fada80d7858.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper explores the concept of prompt tuning in the context of limited labeled data. The authors propose a unified objective function that encompasses three different learning paradigms, and investigate three distinct training strategies for leveraging pseudolabels. The experimental results on six datasets demonstrate the effectiveness of the proposed GRIP strategy. The findings of this study highlight the value of utilizing pseudolabels generated by CLIP itself to enhance its performance in diverse learning scenarios. Strengths: * this paper is well-written with a clear structure, making it easy to understand * this paper extensively explores a board design space, including the prompt modality, the learning paradigm and the training strategy * the authors conduct extensive expriments to verify the effectiveness of this method Weaknesses: * The writing of the article is not clear in some details, such as the typo error in line 290 and inconsistencies in abbreviations. * please see questions listed below Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * I am little confused that does `line 217` means reinitializing the prompts at the begining of each iteration? Or does it mean only reinitializing the set of pseudolabels while the prompts are kept? * Is the process of pseudolabeling done online or offline? If it is done offline, is it time-consuming when there is a large amount of unlabeled data? * Are the abbreviations TZSL mentioned in `line 182` and TRZSL mentioned later referring to the same concept? This might be a bit confusing. * The experimental results in Table 1 show significant improvements compared to CLIP or other baselines. However, there is a slight decrease in some cases. Perhaps the authors could try to explain the reasons behind this? For example, it could be related to the dataset or other factors. * I think there might be a typo error in `line 290`. It should be Table 2 instead of Table 5. * Have you validate the effectiveness on datasets with more categories (e.g, ImageNet-1K), because the quality of pseudo labels might decrease when the category space get larger. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: I think the authors have adequately discussed the limitations of their research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback. Below we address all your questions. > Does line 217 means reinitializing the prompts at the beginning of each iteration? Or does it mean only reinitializing the set of pseudolabels while the prompts are kept?” That’s correct. In line 217, we say that after each iteration we recompute the pseudolabels and then reinitialize the prompts (both in IFPL and GRIP). The reason behind this choice is to avoid accumulating errors from previous iterations. > Is the process of pseudolabeling done online or offline? If it is done offline, is it time-consuming when there is a large amount of unlabeled data? At each iteration, we generate pseudolabels for the unlabeled data from scratch. As the reviewer points out, the computation time becomes non-negligible with large amounts of unlabeled data. We mitigated this issue via parallelization. We did not explore more sophisticated solutions, which we believe could become a topic of interest for future research, given the significant gain in performance obtained using pseudolabels dynamically. We thank the reviewer for raising this important question. We will add a brief discussion to the limitations section. > The experimental results in Table 1 show significant improvements compared to CLIP or other baselines. However, there is a slight decrease in some cases. Perhaps the authors could try to explain the reasons behind this? The few cases of slight decreases in performance in Table 1 happen for different reasons. For EuroSAT (SSL), the prompt modality in use matters. Indeed, adapting textual prompts with just a few labeled examples (CoOp) is enough to learn as much as we are able to learn with GRIP. This does not happens for visual prompts. The prompt modality has an impact also for Flowers102, indeed under the same settings, using textual prompts bring significant improvement. For FGVCAircraft, the initial set of pseudolabels has very low accuracy for SSL and UL (see Figure 6 in the Appendix) thus the learning process might become ineffective or even damage performance. > Have you validate the effectiveness on datasets with more categories (e.g, ImageNet-1K), because the quality of pseudo labels might decrease when the category space get larger. In our paper, we focused on datasets where CLIP performed poorly, primarily due to domain shift with the training data, as well as domain-specific tasks [31, Section 3.1.5, and Figure 5]. Consequently, we excluded larger datasets such as ImageNet-1K, which were either too similar to CLIP's training data distribution or represented very general domains. Exploring datasets with more categories could be an interesting avenue for future research. --- Rebuttal Comment 1.1: Comment: Thanks for the author's response, and I have already read all of the comments. My main concerns have already been solved, but I still believe that a scalable approach should have lower additional computational costs and be able to scale to datasets with a larger number of categories. --- Reply to Comment 1.1.1: Comment: Thank you for your response! We are glad that your main concerns have been solved. In light of that, we hope that you will consider increasing your support for our paper. If the scalability remains a concern, Reviewer kGE6 addressed this in their last comment: > About the computational budget: this argument as the weakness is not strong for me as there are a bunch of methods developed in many areas, like vision, MT and speech to have computationally efficient pseudo-labeling (when we train one model with regenerating pseudo-labels during training) -- so here I would not be surprised that they are applicable with small modifications and could resolve the computational budget issue. We agree that speeding up pseudo-labeling should not be a major challenge. CLIP even offers new tricks. For example, if only one prompt modality (text or image) is used, the embeddings of the other encoder can be cached. This will easily lead to an almost 2x speed up over a naive approach. Thank you so much for your valuable feedback.
null
null
null
null
null
null
Multi-Fidelity Active Learning with GFlowNets
Reject
Summary: In this manuscript, the authors propose a multi-fidelity active learning scheme based on GFlowNets. The work mainly aims to tackle scientific discovery problems, where one often faces exploring a huge high-dimensional space to identify novel, diverse, high-quality solutions. In many scientific applications, accurately evaluating the quality of the potential solutions (or properties/characteristics of novel candidates) is expensive, hence lower-fidelity surrogate models are frequently adopted for efficient cost-effective evaluation. The current work investigates how to carry out active learning - more specifically, in the context of de novo query synthesis instead of a pool-based active learning scenario - when multi-fidelity oracles/surrogates are available in order to efficiently identify a diverse set of high-quality candidates within a given budget. Strengths: The multi-fidelity active learning scenario investigated in this work is of interest in various scientific discovery/design scenarios. This work proposes how GFlowNet, a popular generative flow network model that can serve as an amortized sampler for drawing high-reward samples from a high-dimensional distribution, can be utilized for active learning under multi-fidelity setting. According to a relatively simple and intuitive procedure outlined in Algorithm 1, this work shows that the proposed MF-GFN has the potential to identify a diverse set of high-scoring candidates at a lower acquisition cost compared to active learning schemes that rely on a single-fidelity acquisition function. Weaknesses: Although the proposed approach MF-GFN is reasonable, there are several major concerns regarding the current study. 1. While the authors claim that the proposed MF-GFN outperforms single-fidelity active learning as well as other multi-fidelity AL schemes, the evaluation results presented in the current manuscript (e.g., Figure 1, 2, 3) are not yet very convincing. It appears that MF-GFN doesn't necessarily outperform other alternatives in a consistent manner, and when it does, the performance gain doesn't seem to be very significant. 2. For single-fidelity AL, the authors only consider the use highest fidelity oracle, which quickly consumes the AL budget. Unless the high-fidelity samples lead to substantial learning improvement, it would be more desirable to use low-fidelity samples. Of course, the actual relative value of high-fidelity vs. low-fidelity samples (considering the acquisition cost) would be different case-by-case, hence it is unclear whether the current examples provide fair comparison between MF vs. SF active learning. To be fair, single-fidelity active learning performance should be evaluated at each of the considered fidelities to provide a more comprehensive picture of how SF AL would work at different fidelities. 3. On Hartmann function, MF-PPO clearly outperforms MF-GFN significantly. What are the characteristics of the Hartmann function that may lead to this discrepancy unlike some other examples considered in this work? 4. Comparisons across different examples should be more consistent. Currently, different sets of methods are evaluated in different examples, and different K values were used for evaluating the top K samples. This looks quite arbitrary and unless there is a clear reason for these choices, the same set of methods should be evaluated based on the different examples using the same K value (or same set of K values). 5. There should be further discussion on the computational cost of fitting h to D and retraining the GFlowNet in each iteration (of batch acquisition). Considering that the multi-fidelity oracles may often be computational models with different computational cost, it may be sometimes (or often) more desirable to reduce training the GFlowNet multiple times and use this computational budget for a larger number of oracle evaluations. As a result, this training cost should be considered in practice when designing and performing AL campaigns, and these practical aspects need to be discussed further. 6. There is currently no discussion regarding the impact of the batch size on the performance of MF-GFN and its comparison to other alternatives. 7. Although (2) is a simple yet reasonable way of evaluating a potential sample considering both the acquisition cost and its value, it is not clear whether this would be a reliable estimate of the "value" of a given sample normalized by its acquisition cost. There should be a better justification for this cost-adjusted utility function or at least some empirical evaluation. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the comments above in "Weaknesses". Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The manuscript briefly discusses some potential limitations of the current study and its broader impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 3rWi, Thank you for the insightful review. We particularly appreciate the accurate summary, highlighting the specific challenges of scientific discovery that our work tackles, as well as the fact that you identified the most relevant strengths of our submission. Regarding the limitations highlighted in your review, let us address them one by one below. ### 1. “Evaluation results are not very convincing” You indicate in the review that “MF-GFN doesn't necessarily outperform other alternatives in a consistent manner, and when it does, the performance gain doesn't seem to be very significant.” Let us first re-emphasise that our goal was to design an active learning algorithm for scientific discovery to identify “novel, diverse, high-quality solutions”, as you rightfully mention in your review. Therefore, it is important to follow a wholistic approach to analyse the results. Namely, both diversity and high scores are required and good performance in only one of these metrics is not enough for the family of scientific discovery applications that are the target of our work. With this in mind, we note that MF-GFN achieves good performance in terms of both diversity and mean top-K scores in all the tasks evaluated. Crucially, our results show that MF-GFN (as well as the rest of GFN-based methods) are able to discover diverse candidates in all cases. Further, according to our results, MF-GFN significantly outperforms the alternative methods in most tasks. For example, on the AMP task, MF-GFN achieves the best mean energy (0.8) with 10x less budget than the next best methods. On the molecular ionisation potential (IP) task, MF-GFN achieves top mean energy over 100 samples about -4.5) with ~40 % of the budget, while the next best method (MF-PPO) only achieves -6.5 mean energy with the same budget and only achieves its maximum (slightly better than MF-GFN) with 60 % of the budget. Importantly, our results reveal that while MF-PPO is able to find high scores (unsurprisingly, as a RL-based method) the set of top-K candidates has very low diversity in all cases (5-8 times less diverse than GFN-based methods). Finally, we would like to note that the improvements in budget utilisation achieved by MF-GFN can have a significant impact if translated to scientific discovery applications with very costly oracles, as is our plan in future work. By way of illustration, the budget savings displayed by MF-GFN in the molecular tasks with respect to SF-GFN, GFN with random fidelities and random sampling (we deliberately exclude MF-PPO in the comparison because of the lack of diversity) imply for example that many more molecules could be screened by the oracles with the same budget. ### 2. Single fidelity experiments with lower fidelity oracles We agree that this is an interesting comparison. Therefore, we have trained the single fidelity active learning methods in the molecular IP task (since it uses realistic oracles and costs) by using the lowest fidelity oracle. The results are provided in the separate page with figures (Figure 4b). We can see that (as expected) the single-fidelity method with the lowest-fidelity oracle uses less computational budget to find high-scoring candidates. However, it achieves lower mean scores than MF-GFN and slightly lower than SF-GFN with the highest-fidelity oracle, not being to leverage the available budget. ### 3. Why MF-PPO outperforms MF-GFN on the Hartmann task In the context of the Hartmann experiment, GFN initiates its exploration from the origin point. Conversely, the PPO commences from a random starting point within a bounded range, allowing at most three units of displacement (maximum possible displacement is 10 units) along each of the six axes. We hypothesise that this additional advantage aids the PPO algorithm in expediting the discovery of modes within the optimization process. ### 4. Evaluations with different values of K The values of K were selected so that they were consistent with (larger than) the acquisition size (active learning batch size), which in turn was set to approximately reflect realistic batch sizes in practical settings of the corresponding tasks. However, it is true that we selected K=100 in most cases as we could have selected K=75,200 or other values. In order to show that the conclusions of our experiments are consistent for different values of K, we provide a new set of results in the figures page with alternative values of K (Figures 2 and 3). ### 5. Computational cost of fitting the surrogate and GFlowNet While the computational cost of training both the surrogate model and GFlowNet are not negligible, in the practical settings where we expect MF-GFN to be of high value, the cost and time is largely dominated by the oracle queries, especially the higher fidelity oracles. To give a notion of the orders of magnitude, training the surrogate and GFlowNet takes hours, while evaluating a single molecule or material with DFT can take several days, or even weeks in the case of wet-lab experiments. ### 6. Impact of the batch size on the performance of MF-GFN We agree this is an interesting analysis point so we've repeated the molecule IP task with different batch sizes and provide the results in Figure 4a. We notice that the reward curve becomes steeper with higher batch sizes. ### 7. Reliability of the MF-MES acquisition function as proxy for the value of a candidate The effectiveness of MF-MES as a cost-utility function in contrast to other established approaches has been examined in literature [1]. Further, we have implemented the GIBBON formulation of MF-MES due to its notably reduced computational burden compared to the traditional variant. Empirical evidence indicates that the GIBBON formulation consistently outperforms alternative existing methods in the domain of multi-fidelity optimization [2]. [1] Takeno et al., arXiv: 1901.08275 [2] Song et al., arXiv: 1811.00755 --- Rebuttal Comment 1.1: Comment: First of all, I would like to thank the authors for the detailed and thorough response to the review comments. In fact, the authors' rebuttal has addressed some of the doubts/concerns I had regarding the manuscript and has alleviated the concerns regarding the rest. As a result, I will be happy to raise the overall evaluation score accordingly. --- Reply to Comment 1.1.1: Title: Ackowledgement for engaging in the discussion Comment: Thank you for carefully considering our rebuttal, re-assessing the score and engaging in the discussion! Please let us know if any further concerns remain, which we would be happy to discuss.
Summary: This paper introduce an algorithm for multi-fidelity active learning with GFlowNets and demonstrate that the proposed algorithm outperforms the baseline methods. Strengths: The paper is well written and includes two synthetic benchmark tasks and four practically relevant tasks for extensive experiment analysis. Weaknesses: The novelty of the paper's contribution may be questioned as it appears to bear similarities to existing works such as BMFAL (Li et al., 2022) and D-MFDAL (Wu et al., 2021) with the exception of the GFlowNets component. Regarding the experiments, several existing multi-fidelity active learning baselines are missing, including DMFAL (Li et al., 2020a), BMFAL_Random (Li et al., 2022a), BMFAL (Li et al., 2022), D-MFDAL (Wu et al., 2021), and MF-BALD (Gal et al., 2017). Furthermore, the variants of the proposed method, namely GFlowNet with random fidelities and GFlowNet with the highest fidelity, seem more akin to an ablation study rather than proper baseline comparisons. Additionally, the evaluation metrics employed, such as the mean score and mean pairwise similarity, are specific to GFlowNet. To ensure fair comparisons, it is recommended that the author considers adopting the evaluation metrics used in previous multi-fidelity active learning papers. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Where are the experiment results for mean pairwise similarity? 2. What is the number of samples selected at each fidelity level? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The limitations are included. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer TG5X, Thank you for the reviews, for highlighting some of the strengths of work and for suggesting avenues for improvement. Below, we attempt to address your questions and the weaknesses one by one. ### Limited novelty The novelty of our contribution seems to be biggest concern in your review. In particular, you point that our work bears "similarities to existing works such as [1, 2] with the exception of the GFlowNets component". The work in [1] is a relevant contribution in the recent literature on multi-fidelity methods and as such as an important source of inspiration to our work. However, we believe there are crucial differences between our submission and BMFAL. You note that one difference is the introduction of the GFlowNet component. In our opinion, this is a substantial difference since not only is GFlowNet an alternative sampling method, but also it enables the application of multi-fidelity active learning to explore highly structured and high-dimensional candidates in combinatorially large spaces. In contrast, the majority of the multi-fidelity BO literature has focused on lower-dimensional, continuous spaces. Furthermore, GFN introduces the advantage of discovering diverse candidates instead of simply finding the mode (as is the approach in the BMFAL and D-MFDAL). Altogether, this enables the successful application of multi-fidelity methods in certain scientific discovery applications in which there has been little to no multi-fidelity literature to the best of our knowledge. As a matter of fact, we argue that the experiments we provide in this paper, leveraging the availability of multiple oracles in tasks such as DNA aptamer, antimicrobial peptide and molecular design, is a novel contribution in and on itself. The goal of the family of problems tackled by [1] is to find efficient PDE solvers where we have access to solvers at different resolution (fidelities). The goal of MF-GFN is to discover diverse candidates with certain properties in a combinatorially large space, having access to oracles with multiple degrees of confidence (and costs). Similar arguments can go to distinguish the contributions by D-MFDAL [2]---we believe the year of publication of D-MFDAL is 2023 though. First, the family of problems tackled by D-MFDAL is also solving PDEs, rather than diverse, high-scoring candidate generation. Second, we believe that the main (important) contribution in D-MFDAL paper is a novel methodological framework to improve the training of deep surrogate models in multi-fidelity active learning. Our contribution is not on the surrogate model side, but rather in the introduction of a generative model (GFlowNet) to explore the search space and sample diverse candidates. ### Missing baselines We have discussed this in the general comments to all reviewers. To sum up, in this work we tackle the problem of generating diverse, high-scoring candidates from combinatorially large, high-dimensional and structured spaces. This is a substantial difference with respect to most previously proposed multi-fidelity methods, such as the ones you mention. Most works in the BO literature focus on optimizing low-to-mid-dimensional, continuous spaces. Examples are predicting fluid dynamics in a rectangular domain or other problems that involve solving PDEs. On the active learning side, most have focused on pool-based active learning, where the goal is to efficiently train a predictive model by efficiently annotating a pool of samples. This is as well remarkably different to our purposes. For these reasons, the methods that you suggest in your review are unfortunately not directly applicable to the scientific discovery problems we explore in our paper. ### Evaluation metrics The evaluation metrics must be closely linked to the desiderata of the application problems. Given the different nature of the problems tackled in much of the multi-fidelity literature with respect to our work, as we have discussed above, the evaluation metrics must also be different. In particular, since our goal is not to train an accurate model, the typical metrics in pool-based active learning methods are not applicable to our work. In BO, a common metric is the regret. However, since our goal is not the optimisation of an unknown function, then regret would not be an accurate measure of success. In the scientific discovery applications we address, the goal is to find a diverse set of high-scoring samples, and our metrics are inspired by previous work that has tackled similar problems [3, 4]. ### Where are the experiment results for mean pairwise similarity? We have admittedly not made a clear connection between the wording used in Section 4.1, which presents the metrics, and the presentation of results in the rest of the section. Mean pairwise similarity is actually the metric of *diversity* we use in our paper and the diversity results are provided in each plot as the colour of the markers, according to the colour legend shown on top of each plot. We will definitely use your feedback to improve the clarity of this important aspect. ### What is the number of samples selected at each fidelity level? For three randomly initialised runs of the molecules IP task, we provide the number of samples selected at each fidelity level once the total budget is expended. Number of samples in increasing order of fidelity: Seed 1: 635, 431, 305; Seed 2: 628, 490, 365; Seed 3: 1034, 216, 45. We will add these statistics for the other tasks in the camera-ready version. [1] Batch Multi-Fidelity Active Learning with Budget Constraints, Li et al., arXiv: 2210.12704 [2] Disentangled Multi-Fidelity Deep Bayesian Active Learning, Wu et al., arXiv: 2305.04392 [3] Biological Sequence Design with GFlowNets, Jain et al., arXiv: 2203.04115 [4] Sample Efficiency Matters: A Benchmark for Practical Molecular Optimization, Gao et al., arXiv: 2206.12411 --- Rebuttal Comment 1.1: Comment: I thank the authors for the responses. They did address some of my concerns. I am open to reconsidering my evaluation if the authors promise to enhance their literature review with accurate references to the pertinent existing multi-fidelity active learning research. --- Reply to Comment 1.1.1: Title: Brief answer about extending the multi-fidelity literature review Comment: Thank you for following up on our rebuttal answer! The updated manuscript will definitely include an extended review of the multi-fidelity literature. In particular, in Section 2 Related Work - which already includes a review of relevant multi-fidelity methods - we will discuss additional previous work on this area, including BMFAL (Li et al., 2022), D-MFDBAL (Wu et al., 2023) and Gal et al. (2017). We will also further clarify the differences of our work with the existing literature, in line with what we have discussed in the rebuttal. Additionally, if you think any other relevant work is missing, we would be grateful to know. Finally, we remain open to discuss any other aspects that you may consider unresolved. Li et al. Batch Multi-Fidelity Active Learning with Budget Constraints. NeurIPS 2022. Wu et al. Disentangled Multi-Fidelity Deep Bayesian Active Learning. ICML 2023. Gal et al. Deep Bayesian Active Learning with Image Data. ICML 2017.
Summary: The authors adapt the standard GFlowNet framework to include a fidelity measurement for the oracle, and demonstrate on synthetic, biological and chemical datasets that, in almost all cases, MF-GFN outperforms relevant baselines in terms of achieving sampling performance within a fixed budget. Strengths: ### Originality The paper applies multi-fidelity ideas from Bayesian Optimization to the GFN framework. This is the first time such a thing has been done, and the standard GFN framework needed to be updated to sample a fidelity as well. ### Quality The quality of the paper and results is sufficient for publication. The synthetic benchmarks are standard for this area of research, and the biological examples for aptamers and peptides are biologically relevant. The QM results for small molecules are less relevant than other tasks (e.g. ADMET in drug discovery) could have been. ### Clarity The paper is very well written and easy to understand. ### Significance The paper is moderately significant since the fidelity configurations are contrived and simple and likely do not represent experimental drug discovery fidelities (see Weaknesses below). Weaknesses: - the multi-fidelity framework is rather simple and would apply in situations when the oracle is computational (e.g. running DFT) rather than experimental (e.g. running biochemical assay), since computation costs are easily assumed to be uniform and applicable per sample, whereas experimental costs are more complex, can require batch acquisition rather than single sample, and the results can be far noisier in general. While it is infeasible to perform such a study for this paper, a synthetic example could be constructed with such properties. - In the main paper, each task is only run with a single multi-fidelity configuration, but it would be interesting to run the same task but with different MF configurations in order to understand how the distribution of fidelities effects convergence per fraction of budget spent. Appendix D.1 does this once for the aptamer example, but a more comprehensive study, perhaps on synthetic data, would be enormously instructive. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - In most examples, it is not clear how, at each fidelity level, the accuracy of the oracle relates to the cost. E.g. in the aptamer example, is the accuracy of an oracle trained on 1m aptamers 100x worse than the free energy calculation of the secondary structure? What is the relationship in the other examples? - Except for the PPO method, diversity seems to be pegged at its highest level in all examples. Is there a more nuanced discussion you could give about this diversity? Like number of modes? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: There is a sufficient discussion of limitations in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer qmsr, Thank you for your insightful reviews. We appreciate your comments about the strengths and weaknesses of our work. Below, we address each of the weaknesses you pointed out as well as your questions. ### Non-uniform per-sample oracle costs This is an interesting and important point indeed, since it is true that in practice, in many scientific discovery applications the cost per sample is not uniform. However, to the best of our knowledge, there is little to no literature on multi-fidelity methods with non-uniform costs for scientific discovery and little literature on more general active learning and Bayesian optimisation. One early example of active learning with varying annotation costs is the work by Settles et al. (NeurIPS workshop, 2008). Given the early stage of the application of multi-fidelity methods in challenging and practically relevant scientific discovery methods, we humbly believe that our work provides a significant contribution to the field and we hope to incorporate varying costs in future work. As a preliminary comment, in principle, we can incorporate such costs within the MF-GFN framework by replacing the costs $\lambda_m$ with the function $\lambda(x,m)$ which defines the cost of evaluating each candidate with each oracle ### Experiments with multiple cost distributions We agree that this an interesting aspect of the analysis and as you indicate, we included a set of results in the appendix on the DNA task. The costs of the oracles used in the molecular tasks reflect the actual relative costs and trying different cost distributions would not only require significant computational demands, but also impact the practical relevance of the results. Nonetheless, in order to shed more light on this aspect, we are providing additional results on the synthetic Hartmann function in the separate figures page of this rebuttal. As you may see, the results are consistent with those provided for the DNA task. ### Relationship between oracle costs and accuracy Regarding the DNA and AMP tasks, we calibrated the cost differences between oracles by drawing from real-world scenarios where practical experiments, like wet lab experiments, might take hours for sequence evaluation, while online simulations might only require a few minutes (hence, a 100-fold magnitude difference). Furthermore, as the low fidelity oracles of the AMP experimental setting displayed similar explained variances, we assigned equivalent costs to them. In the molecules experiment, we employed commonly used quantum chemistry packages for molecular modelling as our oracles, and the costs were established based on average evaluation times across a batch of 1000 molecules. For the synthetic experiments, we relied on costs and oracles borrowed from existing literature [1, 2]. ### A more nuanced discussion on diversity As you indicate, except PPO, the rest of the methods achieve good levels of diversity, which speaks well about the proposed method. The reason is that the rest of the methods are either random samplers, which naturally produce diverse objects, and GFlowNet-based methods, which have been repeatedly shown to sample diverse candidates, since this was one of the core design purposes of its conception, according to the original paper [3]. In case of doubt, the diversity scores *are not* invariant throughout the active learning rounds, but in fact decrease slightly for GFN- and random-based methods. As you suggest in your review, an alternative angle to analyse the diversity of the samples is looking at the number of modes. We are happy to note that this is precisely the angle of the results presented in Appendix D.2 Energy of Diverse Top-K. In particular, we here restrict the inclusion in the set of top K samples by a measure of similarity to the elements in the set and we then provide the mean score of this set of diverse top-K samples. In other words, this metric can be regarded as the mean score of the top-K modes found. We will provide additional details in this section in the camera-ready version of the paper. [1] A General Framework for Multi-fidelity Bayesian Optimization with Gaussian Processes, Song et al., arXiv: 1811.00755 [2] Multi-Fidelity Bayesian Optimization via Deep Neural Networks, Li et al., arXiv: 2007.03117 [3] Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation, Bengio et al., arXiv: 2111.09266 --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their thoughtful rebuttal and for providing additional studies with synthetic data. Reading through the other rebuttals has helped me understand the contributions of this paper better and its context within the related BO literature. I am willing to raise my rating to Accept. --- Reply to Comment 1.1.1: Title: Thank you for the review and open to further discussion Comment: We sincerely thank you for carefully reviewing our rebuttal answers to both your own concerns as well as other reviewers’. We are glad to read that they have been helpful in clarifying these concerns and improving the understanding of our paper. We are working towards incorporating the insights from this discussion into the paper itself too. If other concerns remain, we will be happy to further discuss.
Summary: In this submission, the authors proposed to apply GFlowNets as a sampler to sample for active learning based on the selected acquisition functions, instead of directly optimizing them in the procedure of Multi-fidelity Bayesian Optimization (MFBO). Even though focusing on active learning applications, the authors presented the method more in the MFBO setting, which aims at optimizing a target function by iteratively querying it as well as several different low-fidelity low-cost surrogate functions. In this work, a multi-fidelity Gaussian Process was used as multi-fidelity surrogates and multi-fidelity MES was chosen as the acquisition function. The main focus of the submission is to adopt GFlowNets for MFBO to query according to the acquisition function and the authors claimed that it improves preferred diversity of queries samples. The authors tested the performance of the proposed method with single fidelity BO with GFlowNet, random fidelity with GFlowNet, random selection, and Multi-fidelity PPO on synthetic Branin and Hartmann functions as well as real-world tasks on DNA Aptamers, protein design, and small molecule design. Although the proposed method does not always achieve the best performance, the authors claim that it has better sample efficiency with comparable score optimization performance. Strengths: GFlowNet was implemented for MFBO, specially for active learning tasks. The authors tested the performance of the proposed method with single fidelity BO with GFlowNet, random fidelity with GFlowNet, random selection, and Multi-fidelity PPO on synthetic Branin and Hartmann functions as well as real-world tasks on DNA Aptamers, protein design, and small molecule design. Although the proposed method does not always achieve the best performance, the authors claim that it has better sample efficiency with comparable score optimization performance. Weaknesses: 1. The main concern is that the methodological contribution is limited. The authors are mostly using the existing acquisition functions as the reward function in GFlowNet to solve multi-fidelity active learning. There is not much theoretical analysis of this GFlowNet-based active learning strategy throughout the submission. A more serious concern is that the submission is very much similar as [1], by considering multi-fidelity settings but the fidelity was simply considered as an additional input variable. The whole pipeline and all the methods are very much the same. [1] Jain, Moksh, et al. "Biological sequence design with gflownets." International Conference on Machine Learning. PMLR, 2022. 2. The explanation of using GFlowNet can be improved. As described in the 194th line of the main text, the joint posterior distribution of the input $X$ and fidelity index $m$ was modeled but with the constraints that a fidelity $m>0$ of a trajectory must be selected once and only once, from any intermediate state. The authors may want to first define the DAG (Directed Acyclic graph) of this GFlowNet model, explicitly explain the allowable state transition, forward/backward policies, etc. 3. The design of the multi-fidelity kernel may need further explanations. Especially, adding fidelity indices as additional input by adopting $K_2(m_1, m_2)$ defined between lines 559 and 560 of Appendix does not seem to capture the difference of $m_1$ and $m_2$ and not really invariant when permuting the fidelity indices. How can this guarantee to select appropriate fidelity to query? The authors may want to discuss the intuition behind this design. Also the reference 68 does not exist in Appendix or main text. 4. The provided code does not seem to have GFLowNet-based implementation but only has random and PPO implementations. 5. Since the diversity of the queried data was advertised as a reason for utilizing GFlowNet, the authors may need to provide such validation results in the main text, for example by explicitly comparing the diversity at the end by different competing methods for the tasks. Also it may be better to provide more information on the PPO setup, for example whether it selects one sample or a batch of samples, because the `diversity' should be automatically taken care of by the acquisition function in the sequential setting and it may be tricky to select a batch of samples in this case. 6. The author used the Top-K score of K candidates in each active learning round. The hyper-parameter K, and other hyper-parameters in GP kernels, could be influencing the results and conclusions. Sensitivity analysis should be provided. 7. If GFlowNet-based sampling is the important contribution, then other MCMC sampling methods by acquisition functions besides PPO should be also benchmarked. 8. There are language errors, for example, 1) there are typos in the 148th and 149th lines of the main text; 2) in the 610th line of the Appendix: ‘where C is the cost if the highest fidelity oracle’ should be corrected as ‘where C is the cost of the highest fidelity oracle’; and many others. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Many tested scenarios appear to be MFBO instead of active learning. Shouldn't the author provide some actual active learning experimental results? 2. How fidelity was taken care in this framework? 3. How sensitive the MFBO performance will depend on different hyperparameters? 4. How does the proposed method compare with either optimization-based and other MCMC sampling based methods? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Both methodological limitations and societal impact were discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer KJZ2, Thank you for your insightful feedback. Regarding the missing GFlowNet code in the submission, this is indeed a mistake which we will fix, as the GFlowNet code is actually based on open-sourced implementations. In what follows, we address each of the weaknesses you mention in your review and answer your questions. ### Limited contribution In your review, you mention that our submission is very similar to [1]. This paper is definitely a source of inspiration for our work but we would like to argue that the extension of the applicability of GFlowNet (GFN) for multi-fidelity active learning is not a trivial contribution, since one could think of multiple options to do so and we have proposed one that is simple yet effective. Our methodological contribution by which we adapt the sampling mechanism to include the selection of the fidelity for each sample aims to strike a balance between the cost and the accuracy of the evaluation. The active learning setup used to discover diverse high reward samples has 4 main components: the surrogate model, acquisition function, the GFN, and the oracles. Notably, we've adapted each of these components to fit the multi-fidelity setting after thorough experimentation. Additionally, we have achieved promising outcomes in the context of small molecules experimentation-a setting that, to the best of our knowledge, has not previously been explored within the multi-fidelity framework. We strongly believe that our work makes a significant contribution to the multi-fidelity active learning literature, enabling settings which were previously intractable. ### Clearer explanation of the GFlowNet DAG Since the details of the GFlowNet design (including state transitions and policy models) are specific to each of the tasks, we included them in each the corresponding subsections of the tasks in Section 4 and Appendix B. In Section 3.3, we explained the details of the multi-fidelity GFlowNet that are general to all tasks. ### Multi-fidelity kernel The multi-fidelity kernel design utilized in this study has been sourced from [2] and we acknowledge the oversight in failing to reference this in our initial submission. They implement a Downsampling Kernel for the data fidelity parameter, in cases where it is relevant, along with an Exponential Decay Kernel for the iteration fidelity parameter, when applicable. As our experimental approach treats fidelity as akin to a data point, the implementation of the Downsampling Kernel has been incorporated. ### Diversity results We are not sure of having properly understood this concern (number 5). Diversity is indeed a core metric to evaluate the results of our method. In the unlikely case you missed this, the diversity results are included in each of the figures in the benchmark tasks, as the colour of the markers at each active learning iteration, according to the colour legend above each figure. As you can see, GFlowNet-based and random-sampling-based methods achieve good diversity scores, in contrast to PPO, as is expected. These diversity results and how they differ between methods are commented in the text. Is your question or suggestion about providing quantitative results in the text? As a matter of fact, in our updated current version of the manuscript, we have added quantitative results not only to the text, but also to the figures, by plotting a numerical scale in the colour maps (see figures attached). ### Details about PPO The PPO experiments are performed in the exact same setup as the MF-GFN, with the only difference being the learning algorithm for the policy. Specifically, the policy constructs a single candidate in a trajectory. To generate a batch of candidates we take samples from this policy. The policy is trained with the standard clipped PPO learning objective. Additionally, to improve the diversity and exploration in the PPO experiments we use a number of random initial steps (without which the policy gets stuck on a single candidate). This number is constrained by an upper limit set to one-third of the total length of the sample. It is intractable to have a batch of candidates generated by a policy since there is a combinatorially large number of candidates to select from. Consequently, we cannot rely on a acquisition function to provide the signal for diversity. ### Influence of K In order to shed more light on the influence of the choice of K for reporting the results of the top-K samples, we have added results of varying the value of K. As you can see, the same conclusions largely apply for different choices of K. ### MCMC baselines In the growing literature of GFlowNet-based methods, MCMC has been used as a common baseline, revealing that, consistently, GFlowNet is more efficient at discovering multiple modes of the target distribution than MCMC, when the support is high-dimensional and very large. For these reasons, while we agree that the set of results would be more complete with MCMC baselines, we decided not to include them in the experimental setup [3, 4, 5]. ### Active learning vs. Bayesian optimization As discussed in Section 2, we make use of surrogate models and acquisition functions typical of Bayesian optimisation, but we are not interested in "simply" optimising the unknown, target function, what connects our work with active learning. However, our work is also not akin to standard active learning, since our goal is not to "simply" learn an accurate predictive model, but rather discovering new, high-scoring, diverse examples. ### “How fidelity was taken care in this framework?” The fidelity is incorporated in all parts of the proposed framework. As detailed in Appendix A, the surrogate model and the acquisition function both account for the fidelity and as described in Section 3.3 the fidelity is a part of the GFlowNet action space. [1] arXiv: 2203.04115 [2] arXiv: 1903.04703 [3] arXiv: 2111.09266 [4] arXiv: 2201.13259 [5] arXiv: 2209.12782 --- Rebuttal Comment 1.1: Title: active learning & fidelity Comment: I thank the authors for the responses. After reading the responses and other reviews, I decided to keep my current score for the following reasons: 1. The presented work is more related to Bayesian optimization instead of active learning. The authors should have performed literature review and experiments in the context of Bayesian optimization for 'discovering new, high-scoring, diverse examples'. It is not clear either if that is the case, how significant is the new contribution compared to the previous paper on using GFlowNet for Bayesian optimization. 2. It is understandable to take fidelity as 'a part of the GFlowNet action space. However, the relationships between different fidelity models, accuracy as well as cost, were not taken care in the presented work. If that is indeed case, again, the claim of new contributions to 'multi-fidelity active learning' is not convincing. --- Reply to Comment 1.1.1: Title: Clarification on novelty and related work Comment: Thank you for your response! We would like to clarify some details to address your concerns: 1. As we highlight in the paper in the paragraph starting L49 as well as our common response to all reviewers, the problems of scientific discovery we study are technically different from Bayesian optimization. As opposed to searching for a single candidate maximizing the value of a black-box function, we are interested in searching for diverse modes of the black-box function. Additionally, we are interested in the practically inspired scenarios where the search space is not continuous but rather discrete and structured (for example molecules). As we elaborate in Section 2, where we discuss literature from Bayesian optimization, active learning and active search, this is a setting which to the best of our knowledge - has not been studied in any prior work. We would be happy to add any work you think is missing. 2. It is not merely the fidelity being a part of the GFlowNet action space, but the fidelity being accounted for in the reward function as well. The acquisition function we use (reward for the GFlowNet), described in Appendix A, considers the mutual information between the value of the selected candidate at the selected fidelity with the maximum value of the highest fidelity oracle. This is also scaled by the cost of the oracle at the selected fidelity. In effect the reward of the GFlowNet accounts for the quality (accuracy) and cost of the different fidelities. Additionally in the Appendix we provide details about the effect of different fidelities on all tasks. We hope you will re-consider these aspects in your decision.
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their constructive feedback about our paper. We are sure that the changes motivated by this feedback have improved the present manuscript and will also positively impact our future work. We have responded to each reviewer individually, trying to address every concern and question. Here, we would like to emphasize a few points about concerns that are shared by more than one reviewer. We would also like to mention that we are including in the rebuttal a figures page with 1) a visual summary of the proposed algorithm and 2) results of additional experiments addressing main concerns and questions. **Context**: The method that we present in this paper, multi-fidelity active learning with GFlowNets (MF-GFN), is motivated by a growing need in certain scientific discovery applications for efficient machine learning models that can effectively leverage the data and tools available for scientists. Specifically, in areas such as drug discovery and materials discovery, scientists have access to multiple tools that serve as proxies to characterize properties of potential new candidates. This motivates the need for multi-fidelity approaches which can operate on high-dimensional structures (such as the one we present here) as well as empirical results in science-related tasks. **Absence of Bayesian Optimization and Active Learning Baselines**: While the fields of multi-fidelity Bayesian optimization and active learning have seen important progress in recent years, the specific characteristics of problems such as drug discovery and materials discovery are different from the settings typically targeted by most multi-fidelity methods, such as engineering or scientific problems involving, for example, finding solver parameters for solving partial differential equations, a common task found in this literature. These specific characteristics are that the search space is highly structured (for instance, small molecules, proteins or crystal structures) as opposed to optimization in a continuous Euclidean space as is typical in many PDE problems; high-dimensional and also combinatorially large (for example, the estimated search space for small molecules is about $10^{60}$). These properties limit the direct application of standard Bayesian optimization methods in such problems. Consequently, there are, to the best our knowledge, little to no previous multi-fidelity methods directly suitable for the tasks we tackle in this paper. For these reasons, we could not include multiple baselines for comparison, which is something that some of the reviewers would have liked to see. Instead, we constructed a baseline based on competitive reinforcement learning method (PPO), by using PPO as a multi-fidelity sampler instead of GFlowNet. **************************************************On Diversity:************************************************** The scientific discovery tasks we tackle in this paper also have distinct goals, compared to typical tasks where multi-fidelity methods have been used in the literature. In particular, as opposed to methods where Bayesian optimization is the most suitable method, here the goal is not to find the optimum of an unknown target function, but rather find *multiple* diverse modes of the function. This differs also from the class of problems where pool-based active learning is typically used, where the goal is to train an accurate predictive model through an efficient annotation strategy. For these reasons, the metrics used to evaluate our method needed to be adapted from the metrics used in the BO and AL literature. Instead, we here used metrics proposed in the literature tackling similar scientific discovery problems [1, 2], such as the mean top-K score and the diversity of the top-K candidates. **Comparison with MF-PPO**: Some reviewers mention that the MF-GFN does not always outperform MF-PPO and the other evaluated baselines. We would like to emphasize that because of the nature of these metrics and the desired properties they aim at measuring, both metrics (mean score and diversity of the batch) need to be assessed jointly. In other words, it is not enough to just have high mean score if the diversity is low, because that would be akin to having just one candidate with slight variations. Obviously, diversity by itself with low mean score is not only useless but trivial to obtain. With this in mind, we would like to note that MF-GFN achieves the best results out of all the methods evaluated when considering both metrics simultaneously. This is in stark constrast with PPO, which undoubtedly is able to find candidates with high scores in most tasks, but at the expense of diversity. **Experiments on Synthetic Functions**: Some reviewers noted that MF-GFN does not excel in terms of performance in the synthetic Branin and Hartmann functions. These tasks were included in the evaluation for the sake of completeness, because these are tasks familiar to the multi-fidelity Bayesian optimization community. The goal with these tasks was to show that MF-GFN is also able to obtain results comparable to other multi-fidelity BO methods. However, the relative simplicity of these tasks does not require sophisticated methods. It is in high-dimensional, structured and very large spaces where MF-GFN provides the largest advantage, as we show in the evaluation on the remaining, science-related tasks (DNA, AMP and small molecules). We hope these responses as well as the individual answers to each reviewer shed light on your concerns and questions, and we look forward to further discussion. [1] Biological Sequence Design with GFlowNets, Jain et al., arXiv: 2203.04115; [2] Sample Efficiency Matters: A Benchmark for Practical Molecular Optimization, Gao et al., arXiv: 2206.12411 Pdf: /pdf/c91b41f657fd6a705ede17bbbf785bfc73408da1.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper offers a new framework for multi-fidelity active learning using Generative Flow Networks. Given the recent success of GFlowNets as models for sampling diverse candidates among terminating states in a DAG, the authors attempt to leverage this property to put a new spin on active learning, where instead of sampling from a pool of unlabeled candidates, new objects are sampled from the entire construction space. Furthermore, the GFlowNet is also responsible for sampling the fidelity at which to evaluate the object, which they later show can provide advantage over just sampling the objects themselves. Two important components of this framework are 1) the multi-fidelity GP proxy model, which is a surrogate for the true measure of 'goodness' of a sampled object, and 2) the multi-fidelity acquisition function, which is used as input to the reward function of the GFlowNet to encourage exploration of the construction space. The authors then justify this framework with results from a variety of domains showing that MF-GFN offers promising results in terms of its ability to effectively leverage the lower fidelity oracles to reduce the total cost of exploration compared with only using a single oracle. Strengths: The paper provides an original way of injecting the desirable properties of the generative flow network into the hot field of active learning, which is especially important for guiding modern research in being able to know what experiments to run next. The paper communicates the main ideas relatively clearly and effectively, and is not limited by restrictive assumptions. The paper also backs up their claims with experimental evidence from a variety of domains. Weaknesses: I felt like the biggest weakness of the paper was probably the lack of thorough results and as noted in the Limitations section, the lack of practical oracles. To be an effective framework for active learning, I think it would have been much more compelling to tackle some real problems that domain specific scientists are working on, instead of the synthetic (Branin and Hartmann) tasks where MF-PPO seems to do just as well as the proposed model. Additionally, I felt that the paper was a bit rushed, as there were some glaring typos (e.g. line 303) and some important aspects that were not entirely clear, such as the actual reward function that was used by the GFlowNet. This was not made explicit until the supplementary material and made it difficult for me to understand how the acquisition function precisely tied into the rest of the framework. The last thing that I think would be helpful for the reader to understand is to give details on the budget; things like how long does it take the GFlowNet to sample one object, how long does it take for the oracle to evaluate the object, etc. would be helpful for the reader to put the timescales of things into perspective. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: * One thing that was odd was the presence of the active learning round in the reward function (in the exponent of rho). For rho != 1, it seems like the GFlowNet would have to have the active learning round as input, otherwise it wouldn't know how to appropriately match the flows. However, I could not find this detailed in the paper, and so I was wondering if this could be clarified. * What motivated the choice of using MES for the acquisition function? It felt like this ended up taking a lot of the paper to contextualize, and it seems like there are simpler choices (such as UCB or plain ES) that could have led to a more focused discussion of the MF-GFN itself. * A nice feature of GFlowNets is the ability to train from partial trajectories. Because the goal of the paper is to reduce the overall cost of exploring a design space, I would be interested to hear if any consideration has been made to try to "early stop" some of the trajectories from the GFlowNet, for example if the estimated flow has gone below some threshold. It seems like this might be able to further reduce the cost of sampling. Or if this was not considered, was this due to the relatively cheap cost of generating samples from the GFlowNet compared with the relatively expensive evaluation of the oracle? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer tpVK, Thank you for your review. We appreciate the positive comments about our work as well as your helpful feedback to improve our paper. We have made sure to fix the typos and improve the clarity in the updated version. In the remainder of our rebuttal, we address each of your concerns. ### Breadth of experiments According to your review, the main concern lies in the paper's limited results and the absence of practical oracles. Additionally, you suggest that addressing challenges pursued by domain-specific scientists would have been more impactful than focusing on synthetic tasks like Branin and Hartmann. We would like to first note that our paper includes results on the commonly used Branin and Hartmann tasks just for the sake of completeness. The core of our experiments are the four other practically relevant tasks (Section 4.4), namely: DNA aptamer design, antimicrobial peptide design and small molecules generation. Second, our experiments involving small molecules were carried out using oracles that hold practical significance. This aligns with challenges actively pursued by domain-specific scientists. Discovering molecules with higher negative adiabatic ionization potential holds practical importance for applications like organometallic synthesis and the design of organic semiconductors. As detailed in Section 4.4.3, our MF-GFN method discovers desired molecules with just half the computational budget by the standard single oracle setup. As you note in your review, on small-scale tasks such as Branin and Hartmann, existing methods such as MF-PPO are effective enough and our proposed MF-GFN does not provide a substantial advantage. It is in tasks involving exploration of high-dimensional, very large spaces where MF-GFN provides significant improvements in terms of exploration efficiency and diversity, as shown by our results in Section 4.4. As an additional note, it is also relevant that the MF-PPO algorithm was run with an advantage over MF-GFN: We applied a “warm-up” start of the PPO algorithm with several random steps, because in experiments without that, it was unable to find the modes in the high dimensional space. ### Details about budget and time For the DNA and AMP tasks, designed as inexpensive prototypes of practically relevant (more expensive) experiments, the oracles are either pre-trained neural networks (for instance, see Table 4) or lightweight Python libraries, such as NUPACK (Appendix B.3) whose computational cost is not the bottleneck of the total training time. In contrast, the oracles used in the tasks with small molecules are quantum chemistry packages commonly used in molecular modelling (see Appendix B.5) whose computational cost is not negligible and is actually orders of magnitude more costly than sampling from the GFlowNet policy. Regarding the computational cost of sampling from a trained GFlowNet, it is nearly negligible compared to the oracle evaluations as we mention in the paper (Section 3.2). This is why we can afford to sample a large number N of samples to then select the best K, according to the acquisition function. ### Active learning round in the reward function The acquisition function (MES) exhibits increased sparsity as additional samples are discovered. In order to facilitate optimization, a linear reduction of the parameter β (with a scaling factor denoted by ρ) is implemented at each successive active learning round (hence the active learning round number is an input to the reward function) so as to scale up the rewards. Note that within an active learning round the GFlowNet samples from this fixed reward function and thus the policy need not be conditioned on the active learning round. ### Choice of MES as acquisition function We agree that it would be interesting to repeat the experiments with alternative acquisition functions, such as a multi-fidelity version of UCB [1], but since our main contribution is the introduction of GFlowNet as a generative model in the multi-fidelity active learning setting, experimenting with multiple acquisition functions was out of scope. The choice of a competitive acquisition function such as MES makes random sampling a stronger baseline, as well as more efficient than plain ES [2]. Secondly, we would like to note that recent multi-fidelity methods proposed in the literature [3, 4, 5] have adopted or adapted information theory-based acquisition functions, such as MES, rather than UCB and EI. Finally, it is worthwhile to mention that we have adopted the GIBBON formulation of MF-MES which has been experimentally shown to consistently outperform other existing methods in the context of multi-fidelity optimization. ### Train GFlowNet from partial trajectories We are unfortunately not sure to have understood this question. If you refer to GFlowNet loss function that are able to assign credit to partial trajectories (Pan et al., 2023), we agree that this will likely improve the training efficiency and future work should consider the use of such objectives. [1] Multi-fidelity Gaussian Process Bandit Optimisation, Kandasamy et al., arXiv: 1603.06288 [2] Max-value Entropy Search for Efficient Bayesian Optimization, Wang and Jegelka, arXiv: 1703.01968 [3] Disentangled Multi-Fidelity Deep Bayesian Active Learning, Wu et al., arXiv: 2305.04392 [4] Deep Multi-Fidelity Active Learning of High-dimensional Outputs, Li et al., arXiv: 2012.00901 [5] Batch Multi-Fidelity Active Learning with Budget Constraints, Li et al., arXiv: 2210.12704 [6] A General Framework for Multi-fidelity Bayesian Optimization with Gaussian Processes, Song et al., arXiv: 1811.00755 --- Rebuttal Comment 1.1: Comment: Thank you for the in depth response! It has helped me better understand your work. I believe the my current score is fair as the paper does produce intriguing results with a novel active learning pipeline, but as other reviewers have pointed out, the extent of the novelty may be limited as there are many similarities to [1], albeit with a different acquisition function and fidelity as an output of the model. I also think having a complete code base with the GFlowNet implementation would further strengthen this paper. [1] Jain, Moksh, et al. "Biological sequence design with gflownets." International Conference on Machine Learning. PMLR, 2022. --- Reply to Comment 1.1.1: Title: Brief answers about novelty and code availability Comment: Thank you for reading our rebuttal and engaging in the discussion. In your latest response, you mentioned as remaining concerns that the novelty with respect Jain et al. (2022) and the availability of a complete codebase including the GFlowNet implementation. Let us briefly answer these two points. Regarding the code, we can guarantee that the we will release a complete implementation, as it builds upon existing open source libraries and reproducibility and availability of our code is a core principle for us. We regret that, by mistake, we did not include the GFlowNet code in the submission. Regarding the differences with respect to the work by Jain et al. (2022), we would kindly point you to our [answer to Reviewer KJZ2](https://openreview.net/forum?id=2ZtGWNn37W&noteId=LLDo6GMOiE) under the section “Limited contribution”. In brief, while both papers deal with active learning with GFlowNets for scientific discovery, the multi-fidelity component has involved the adaptation of the four main parts of the algorithm: GFlowNet, surrogate models, acquisition function and oracles. Furthermore, we have provided an extensive empirical evaluation of our proposed multi-fidelity algorithm. We hope we have shed additional light on these aspects. If other concerns remain, we would be happy to discuss further.
null
null
null
null
null
null
Implicit Convolutional Kernels for Steerable CNNs
Accept (poster)
Summary: This paper tackles one problem of Steerable CNN: one needs to analytically solve a group G-specific equivariant constraint (eq 2 in the paper) in order to obtain the basics for the kernel. The authors propose to avoid this analytical solution by using a G-equivariant MLP to parameterize a G-steerable kernel basis (hence called implicit kernel). By doing so, they obtain a framework that allows one to achieve equivariance to various groups even when an analytical solution for the kernel basis is not know. The authors provide a theoretical proof that MLP equivariance is sufficient for building an equivariant steerable convolutional layer and empirically show that the suggested framework is useful in practice. Specifically, they show the relevance of the model being able to adapt to smaller G < O(n) (as opposed to assuming O(3) as in SEGNN) using the N-Body simulation system. They show the generalizability of their method on the ModelNet-40 dataset where the proposed technique works better than Standard Steerable CNNs when the analytical solution for the basis is not available. And finally they show the flexibility of their method that allows one to introduce additional constraints (such as bonds between molecules or atoms) on QM9 dataset. In both the last two tasks the authors compare their method with an extensive list of SOTA methods. Strengths: - The paper tackles an important limitation of steerable CCNs. - The proposed solution is shown to be effective, at least when the analytical solution is unknown. Hence constitutes a detectable and positive improvement for the community. - The paper reads well. Weaknesses: In my opinion there is an opportunity to strengthen the paper with a bit more empirical analysis. Since the kernel is learnt, it might be insightful to know to which extent the learnt solution is equivariant. Currently accuracy on downstream tasks is used as a proxy for that, but a specific analysis on equivariance violation could further enhance the paper. For example, is there a relation with the MLP size and the ability to learn the equivariance? Etc... Some parts could be more clear or more details could be provided. There are missing details about some experimental tasks, or how the model size used compares to the SOTA, etc… Please see the questions below for a more concrete list. The paper is not accessible to the wide ML community. Specifically the paper makes a lot of assumptions about the reader’s knowledge. I don’t think this is a reason for rejection but it would be great to make an effort, whenever possible, to make the paper more accessible. Just a few examples but there are many through the paper: - not all readers might be familiar with the concept of natural action (line 82). I’d recommend to clarify it. - "matrix containing the Clebsch-Gordan coefficients". Could explain what they are or add a reference. - Some of the notation used could be clarified, for example not everyone is familiar with the outer semidirect product used. It would be great to define it. **Minor notes:** - Definition 3: I’ would consider adding “for all g in G in x in X” - Line 289 “it’s” ==> it is (to be consistent with the rest of the paper) - Line 314 refers to Table 5.3 but I think it might be Table 1? - In the Bibliography the name of conferences are not consistent: sometimes it uses the full name of the conference and the abbreviation, some other just the abbreviation, other again just the full name. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I think the paper is good. There are opportunities to strengthen it by addressing some of the weaknesses mentioned above and by addressing the questions here. - Do the authors think it’s useful and possible to provide the empirical analysis mentioned in the weaknesses. - Can the author make an effort to make the paper more accessible? - In section 2 the authors write “Trivial representation: all elements of G act on V as the identity mapping of V , i.e. ⇢(g) = I. “ This does not seem to satisfy the definition 2, specifically this mapping does not seem to be invertible. Is this statement correct? - Can you describe maybe with an example how you would build the G-equivariant MLP? Section 3.4 would benefit from a more concrete explanation (even if in the appendix). Even better if the authors would consider to publicly releasing the code. - “We choose the model’s hyperparameters to have a similar parameter budget to the SEGNN”. Can one be more specific? How many parameters for example were trained in SEGNN compared to the proposed model? - In the ModelNet-40 experiment there is a sampling step involved. The Figure reports results with error bars using 5 runs but it’s unclear if the 5 runs are with different initial seeds or different sampling. Also is 5 runs sufficient? - In Figure 2: Do the authors have any intuition about why there is such a large STD for G=M but not for other groups? - 5.3: The task for this experiment could be explained better. The dataset is clear but what is the task? - QM9 task: “Both models we reported here have approx. 2 · 10^6” How does this compare to other models? - For the QM9 tasks more details about the meaning of the regressed energy values would help. Not everyone if familiar with this dataset and task. **Appendix B2** - For the N-Body “The run-time, on average, was 5 min. “ On which machine? And what is “run-time”? Per Single point? Per Bach? For the whole training? - For ModelNet-40 the machine is specified but the what is ”Runtime” should still be clarified. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The papers states the limitations in within the results discussion. For example
: - ”The only statistically significant negative difference is presented for G = SO(3), for which a tailored and hence optimal kernel basis is already available.” or - “Although we do not outperform more task-specific and involved approaches such as PointMLP [31],” or - "For the remaining energy variables (G, H, U, U0, ZPVE) and R2, our model significantly falls behind the task-specific benchmark approaches such as DimeNet or PaiNN.” Additionally there is a limitation section in Appendix C. I would encourage the authors to bring the limitation section from the appendix into the paper and summarize the limitations found during all the experiments in the same section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer found our contribution to be a positive improvement for the community. We will address the reviewer's questions and suggestions one by one. ### Weaknesses - We would like to emphasize that the learned kernels are already equivariant to a pre-defined group $G$ by construction (see Lemma 1). Some design choices, e.g., non-linearity, might loosen the exact equivariance, but the problem is characteristic of the framework of Steerable CNNs as a whole. We also emphasize that we do not learn the group $G$ - it is instead a hyperparameter of the model. - We will include more details concerning the computational complexity and memory cost of all models in the supplementary material of the camera-ready version. - We agree with the reviewer that some concepts can be further clarified. We will use the additional space of the camera-ready version to include additional theoretical details and make the manuscript more accessible. ### Questions - We address the question in point 1 of weaknesses. - Definitely. We will add theoretical details on steerable CNNs in the camera-ready version to improve accessibility as well as improve the flow to improve readability. - We specify that a representation $\rho(g)$ of a group $G$ should map into the set of invertible matrices rather than be faithful as the reviewer suggests. Therefore, the constraint is satisfied for trivial representations, as $\rho(g) = I$. - We will indeed release the code with the camera-ready version. We also aim to expand section 3.4 to make it more accessible. It is worth mentioning that the procedure described in section 3.4 is already implemented in public libraries, for example, $\texttt{escnn}$. - Canonical SEGNN model has $147$K parameters and the proposed model has $93$K parameters. - We use a different seed for each run. Five runs is a typical choice in deep learning experiments to estimate statistics. Moreover, given the low standard deviation observed, we find that five runs were sufficient in practice. - While it is hard to give a precise answer, we hypothesize that this happens since $G = M$ and $G = Inv$ lead to much less constrained models compared to other groups since those groups are very small (each has $2$ elements). Furthermore, learned kernels might vary significantly, leading to high standard deviation. - The task is to classify point cloud models. ModelNet-40 has 40 classes of furniture. In our experiments, we evaluate the average accuracy across all classes. - For comparison, SEGNN has $1 \cdot 10^6$ parameters, PaiNN has $0.6 \cdot 10^6$ parameters, SphereNet, DimeNet, and GemNet all have around $2 \cdot 10^6$ parameters. - We agree that more detailed information regarding the QM9 dataset should be included and will add a brief description of the variables in the supplementary material. - The training was performed on 1 NVIDIA Tesla V100 GPU, and the run-time refers to the whole training. We will include those clarifications in the supplementary material. - The run-time (training time) varied across different $G$. On average, it took $4$ to $8$ hours for a model to converge. --- Rebuttal Comment 1.1: Title: Thanks for considering my suggestions and addressing my questions Comment: I would like to thanks the authors for answering my questions. I am satisfied with the answers and I believe that if the authors will include the modifications suggested in the manuscript the paper will be stronger. The only two comments (none strictly needed for a weak acceptance but something for the authors to consider) that I would like to add are: 1. On the first weakness: as the authors mentioned, there is a gap between the theory and the practical implementation. What I think would add value is to test to which extent those design choices ( e.g., non-linearity) have loosen the exact equivariance. 2. On the statement "Five runs is a typical choice in deep learning experiments". I believe this is an arguably bad choice, especially in a case (like this one) where there are two sources of stochasticity (random seed and sampling step). I believe that a more statistically significant analysis would have made the empirical evaluation stronger (but again not needed for weak acceptance). --- Reply to Comment 1.1.1: Comment: 1) In this work, we use quotient nonlinearities (see [7] for details) that employ discretised Fourier transform, which renders them approximately equivariant. We think the effect of such nonlinearities is beyond the scope of this paper but has been actively studied in a few previous works. See, for example, [a], [b], [c]. The key finding is that by keeping the number of samples sufficiently high, one can achieve a negligibly small equivariance error. In our experiments, the equivariance error (MSE) was approximately $10^{-5}$. 2) We would like to emphasise that the variance at five samples is particularly low for most groups in the MN-40 experiment, so we don’t expect the average performance to change much with more samples. [a] Franzen, D., & Wand, M. (2021). General Nonlinearities in SO(2)-Equivariant CNNs. Neural Information Processing Systems. [b] Poulenard, A., & Guibas, L.J. (2021). A functional approach to rotation equivariant non-linearities for Tensor Field Networks. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 13169-13178. [c] Cohen, T., Geiger, M., Köhler, J., & Welling, M. (2018). Spherical CNNs. ArXiv, abs/1801.10130.
Summary: This work considers the setting of steerable networks, in which a particular kind of group-equivariant (alternatively, G-steerable) kernel is translationally convolved with an input vector (or matrix) field. The group-specific design of such a network focuses on the derivation of the group-equivariant kernel. Although prior work laid out a procedure for deriving an analytic, orthogonal basis of functions whose span encompasses all such G-steerable kernels, this paper presents a conceptually simpler method: parametrizing the kernel with an equivariant map. This method is flexible and allows conditioning the kernel on other invariant or equivariant attributes. They test their method on datasets of point cloud classification, synthetic N-body simulations, and molecules (QM9). Strengths: Originality: Using an equivariant network to parametrize a group-steerable kernel is a natural and elegant idea. Although implicit kernels have been used in other settings, their application to steerable kernels is novel and well-founded. Conceptually, it is simpler than previous analytical approaches to deriving explicit, orthogonal bases. Clarity: The proposal is clearly described, and the paper provides a helpful consolidation of related work overall. Quality: The proposal is indeed equivariant, and experiments show good results over a sufficiently wide range of settings. Significance: Group-steerable networks are widely used, and this formulation seems to apply to all compact groups of interest. Weaknesses: 1. It seems to me that the vast majority of application problems involving subgroups of $E(3)$ acting on $\mathbb{R}^3$ simply involve $SO(3)$ or $O(3)$. In that sense, the scope of this paper may be limited. Nonetheless, this does not preclude other groups of interest from arising in future applications, but the paper could be strengthened by providing additional motivating examples. 2. The tradeoffs between this method and the analytical method of [7] are subtle, and indeed they require many of the same tools (for example, knowing the irrep structure of the group of interest, so that one can construct equivariant functions over the input space, even if they are not required to be orthogonal). As noted in the questions, I do not fully understand the benefits of this method as compared to [7], although it is clearly more general and simpler to articulate. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. My biggest question is that I do not understand the second paragraph of Section 3.2, comparing this approach with that of [7]. The paragraph seems to claim that the basis produced by [7] might be too large, but I thought [7] would produce an orthogonal equivariant basis — and if it is orthogonal, how can the “size” of the basis be suboptimal (as it is determined the dimension of the vector space of steerable functions)? 2. On a related note, both this paper and [7] seem to require knowledge of the irreps of the group of interest. Can the authors elaborate on which of the ingredients (a) through (d) listed at the bottom of page 5 of [7], are troublesome? Section 3.2 seems to imply not that the subgroup method for obtaining a steerable basis is computationally intensive, but that it produces a lower quality output — is that the case? 3. If already provided with a steerable basis in the style of [7], could one still “condition” on physical info (at least invariants) by just storing a bank of coefficients, and choosing different coefficients based on endpoint features? If so, this could make more sense as a baseline experiment. Of course, such a method may not choose coefficients smoothly as a function of the endpoint features, which is one potential benefit of the authors’ proposed conditioning method. 4. To clarify, if you had a G-equivariant MLP, is the primary barrier to using this G-MLP to derive a G-steerable orthogonal basis the orthogonalization? In other words, one could set random weights to the G-MLP, and then try to orthogonalized to get a basis of equivariant functions? (I think the problem is this: orthogonalization can be done via Gram-Schmidt, but only if one can compute inner products exactly — with $\mathbb{R}^3$ as the base space, it does not seem like one could do this exactly. However, I would be happy to hear the authors’ thoughts on this, as if there were an easy way to reduce a G-equivariant MLP directly to a steerable basis, it would perhaps weaken the argument that their proposed method is easier than computing an explicit G-steerable basis.) 5. In Section 5.2, why is the baseline an $O(3)$-equivariant net? This is an unfair comparison for a problem that is not $O(3)$-equivariant, since such a network can never express the ground truth solution. Can the authors report performance relative to a generic net instead (e.g. only translation invariant), or an approximately equivariant network? 6. What do the authors hypothesize causes the difference in performance shown in Figure 3? For example, is it conditioning on radial vectors? As a quick note, here is one other piece of slightly related work at the intersection of equivariance and implicit convolutional kernels: “Relaxing equivariance constraints with non-stationary continuous filters” by Ouderaa et al 2022. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: There are not potential negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are happy the reviewer found our idea elegant and well-founded and appreciated the clarity of the presentation. We now would like to address the concerns that were raised. ### Weaknesses - We would like to emphasize that 2 out of 3 experiments deal with symmetry groups other than $E(3), O(3), SO(3)$, namely MN40 and N-body problems, where $G = SO(2)$ or $O(2)$. However, we agree that in future, there might be applications that would find our approach particularly relevant. For example, we find that the task of molecular binding is particularly relevant, which we try to imitate with the N-body experiment. Besides, the generalization to pixel and voxel data is quite straightforward, for which our work provides the necessary theoretical foundation. - We appreciate the reviewer's question regarding the comparison between our method and [7]. We would like to note that the benefits of our approach are 1) better performance for the same $G$ (see MN40 experiments), 2) enabling conditioning (see QM9 experiments), and 3) overall simplicity of implementation. To better highlight the difference, we now provide a table (see the PDF file attached to the global rebuttal) with key ingredients for both approaches. The key difference is that [7] requires an ad-hoc $G$-steerable basis, which might not be trivial to develop while we alleviate it. Additionally, in [7], authors use group restriction, which has its own limitations described in 3.2. ### Questions - To clarify the difference with [7], we will further expand the section in the camera-ready version. Besides, we now include the requirement table, which might be of help. Regarding the question: the $G$-steerable basis used by [7] is only a finite subset of the analytical basis for the infinite-dimensional space $L^2(\mathbb{R}^n)$. However, because the space spanned by the basis must be closed under $G$, the choice of $G$ limits the degree of discretization of the analytical basis. Hence, group restriction might lead to a suboptimal basis because the $G$-optimal discretization might not be optimal for another group $G' \subset G$. - In our opinion, the most troublesome ingredient from [7], section 3, is *b*, the $G$-steerable basis $\mathcal{B} = \{Y_{ji}\}_{ji}$ for $L^2(\mathbb{R}^n)$), which we tackle in the paper. - We find that alternative viable yet limited. The main concern is that the strategy is only feasible for discrete invariant variables, while continuous invariant variables require ad-hoc adaptations. One also has to ensure equivariance when dealing with $G$-equivariant features, which might be far from trivial as it is within our framework. - While the approach suggested by the reviewer might be a valid initialization scheme for an implicit steerable kernel, we want to emphasize that the issue with a $G$-steerable basis doesn't root in the orthogonality but rather in its finiteness (see p.1 in Questions). - We agree with the reviewer that a non-equivariant baseline would be appropriate and, furthermore, include it in the experiment. Below we indicate the performance (MSE averaged over 5 runs) of a message-passing neural network (a non-equivariant counterpart of SEGNN) with SEGNN and our model for reference: | Stiffness | 0 | 1 | 5 | 10 | 25 | 50 | 100 | 200 | 1000 | |:------------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:| | MPNN | 0.0022 | 0.0031 | 0.0068 | 0.0087 | 0.0030 | 0.0162 | 0.0560 | 0.0978 | 0.1065 | | O(3)-SEGNN | **0.0009** | 0.0092 | 0.0117 | 0.0183 | 0.0291 | 0.0151 | 0.0229 | 0.0938 | 0.1313 | | SO(2)-CNN-IK | 0.0010 | **0.0010** | **0.0009** | **0.0008** | **0.0008** | **0.0008** | **0.0014** | **0.0043** | **0.0162** | - We indicate that Fig.3 depicts the benefit of using more flexible implicit $G$-steerable convolutions instead of ones using the approach described in [7]. We argue that the difference is caused by the optimality of implicit basis (see p. 1 in Questions) (sub-optimal for [7], task and $G$-specific for ours). - We are happy to include the paper mentioned since we find it indeed highly related. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: Thank you to the authors for their thoughtful response. I am inclined to raise my score, but I have just a few remaining questions and comments. 1. Thank you for the new experiment with a non-equivariant baseline. I am satisfied with these results. 2. Unless I am mistaken, Lemma 1 is a basic linear algebraic rearrangement of Equation 2. Solving one seems equivalent to solving the other. Therefore, the response to reviewer fqKN’s first question does not make sense to me. 3. Do the advantages of this work hinge on the assumption that the equivariant MLP is a universal approximator? 4. In the case of SO(3), the spherical harmonics weighted by radial basis functions provide a G-steerable basis for SO(3) [3D steerable CNNs, Weiler et al 2018]. To what extent (if any) does this phenomenon generalize to other groups? I find this important to check, because I would like to clarify whether knowing an analytic form for the irreducible representations of a group can provide a G-steerable basis. --- Reply to Comment 1.1.1: Comment: 2. It is correct - the two constraints are equivalent. However, the solution strategy in [7] requires an explicit steerable filter basis (see the requirement table in the global rebuttal). Concerning our approach, the algebraic rearrangement we employ is convenient because it allows us to replace the manually constructed (ad-hoc) basis with a generic $G$-MLP. 3. Yes, it is correct. 4. Spherical harmonics are not the irreps of SO(3); they are steerable functions over the 2-sphere. The analytical solution for $SO(3)$ [3D steerable CNNs, Weiler et al. 2018] implicitly assumes 1) decomposition of $R^3$ into orbits of $SO(3)$, i.e. as a product of $S^2 \times R+$, 2) a steerable basis for each orbit (i.e. the spherical harmonics) 3) a basis across the orbits (i.e. the radial component). In general, knowing the irreps of a group is not sufficient to build a steerable basis.
Summary: The paper proposed implicit neural representations via MLPs to parameterize G-steerable kernels. Additionally, the performance of the approach is extensively empirically tested against alternative methods on multiple tasks. Strengths: - Conceptually In many cases, steerable group equivariant neural networks yield more accurate equivariance in practice compared to counterparts with regular representations that use sampling. Implicit kernels have shown to be beneficial in the space of regular group equivariant networks but have not yet been explored in the context of steerable equivariance. This paper demonstrates how steerable equivariant networks can also be extended to use implicit kernels. - Scientifically The paper extensively validates the method on actual datasets in which more complex symmetries play an important role. The paper extensively validates the approach on both graph structures and dynamical systems, demonstrating relevance in real-world applications and also includes extensive comparison with existing baselines. Weaknesses: - Implicit kernels do not alleviate the need to solve Eq.2 analytically. The method relies on steerable kernels, which require analytically solving Eq. 2 for specific group G of interest. Implicit kernels do not alleviate this, as the constraint of Eq. 2 must still be solved in order to build G-equivariant MLPs. Although these are shortcomings of steerable equivariance and not of the implicit kernels, it should be stressed that implicit kernels do not mitigate these shortcomings. - High computational complexity and memory cost The method only seems to increase computational and memory complexity over already more expensive steerable kernels. - No benefit with increased depth of implicit kernels The paper relies on using multi-layer MLPs as implicit kernels but notices that deeper MLPs do not necessarily lead to improved performance (App. B.3 and App. B.4). The paper does argue that the increased flexibility that implicit kernels offer may benefit long-range dependencies but does not conduct any experiments on this. It would be interesting to demonstrate the benefit of implicit kernels on tasks where deeper kernels help. - Class of implicit functions Is there a reason why only multi-layer perceptrons are considered to parameterise implicit functions? Since the depth of implicit functions has not been found to be beneficial, it could also be interesting to consider different simpler non-linear bases. Even more so, because they appear to be effective in the context of regular group equivariant networks (e.g. sinusoidal/fourier [1], b-splines [2] or simple radial basis features [3]). Or is there a reason why this would not be applicable or less interesting in this setting? [1] Sitzmann, Vincent, et al. "Implicit neural representations with periodic activation functions." Advances in neural information processing systems 33 (2020): 7462-7473. [2] Bekkers, Erik J. "B-spline cnns on lie groups." arXiv preprint arXiv:1909.12057 (2019). [3] van der Ouderaa, Tycho FA, and Mark van der Wilk. "Sparse Convolutions on Lie Groups." NeurIPS Workshop on Symmetry and Geometry in Neural Representations. PMLR, 2023. Technical Quality: 3 good Clarity: 3 good Questions for Authors: a) Am I correct in the assessment that the method does not alleviate the need to solve Eq. 2 analytically and thus does not directly improve the efficiency of steerable kernels in that regard? b) What are computational runtimes? (and what is the explicit overhead of implicit kernels) c) What are memory costs? (and what is the explicit overhead of implicit kernels) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper extensively compares (externally) with alternative approaches. However, the (intrinsic) analysis of how different types of implicit kernels may affect performance remains limited. One of the main benefits of implicit kernels is the increased flexibility it provides when designing the model. It would therefore be very interesting to get a better idea of how different choices in the functional form of the implicit kernels affect the output or model performance. This point becomes especially relevant since the paper includes negative results on deeper implicit kernels (which nevertheless are interesting findings), as this would help demonstrate how implicit kernels can be beneficial in practice. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for highlighting the scientific and conceptual strengths of the manuscript. We have carefully considered your comments and suggestions and provide a detailed rebuttal below. ### Weaknesses - We want to emphasize that implicit kernels do allow us to alleviate solving Eq.2 by relaxing the constraint to $G$-equivariance of the implicit representation (see Lemma 1). In particular, one does not need anymore to manually construct *ad-hoc* $G$-steerable basis, which otherwise comes from solving Eq.2. To summarize the difference between implicit kernels and standard steerable kernels, we now provide a table (see the PDF attached to the global rebuttal) with key ingredients for both approaches. - We would like to highlight that the complexity of our approach depends entirely on the structure of $G$-MLPs which is defined by the user. In preliminary experiments on MN40, we observed that 1-layer $G$-MLP achieves results better than the non-implicit kernels suggested in [7] while having a comparable number of parameters (approx $2\cdot10^4$). We provide a more detailed analysis in Appendix B.4. - We analyze the effect of depth in the Appendix (see Fig. 5). For the QM9 dataset, and to a lesser extent for MN40, we observe quantitative improvement by increasing the number of layers in $G$-MLP. - We agree with the reviewer that employing our approach on data exhibiting symmetry and long-range dependencies would be an interesting direction, which we leave for further work. Currently, preliminary experiments do not allow us to draw conclusions as long-range dependencies are practically absent in the data we work with. - We would like to emphasize that the depth of $G$-MLP does matter according to the evidence from our experiments. Concerning the choice of $G$-MLP, there is indeed no restriction on using different implicit representations of kernels. However, applying Fourier/B-splines/etc. to achieve $G$-equivariance is similar to crafting a custom $G$-steerable filter basis. In contrast, we intended to develop a more general framework employing generic implicit $G$-MLPs to model general convolution kernels. Nevertheless, we acknowledge the reviewer's suggestion of investigating alternative implicit functions to enhance the computational efficiency of implicit kernels, which could be a promising avenue for future research. ### Questions - No, see point 1 in weaknesses. - We report the training time implicitly in Fig.5, where models are trained for the same number of epochs. While deeper $G$-MLPs are indeed more computationally expensive than non-implicit kernels, 1-layer MLPs achieve comparable run-time. - Let us estimate the computational cost as the amount of memory required to store the model's parameters, activations, and intermediate tensors during forward and backward propagation. According to our evaluation, all models with $G=SO(2)$ applied to MN40 had comparable amounts of parameters ($120$k) and consumed approximately the same amount of memory - $1.5$ MB regardless of the architecture when processing a batch of size $16$. Considering models in the QM9 experiment, the ones with standard steerable kernels had $140$k parameters, 1-layer $G$-MLP - $356$k parameters, 2-layer $G$-MLP - $1.1$M parameters, 3-layer $G$-MLP - $1.2$M parameters. We will include those details in the supplementary material of the camera-ready version.
Summary: This work proposes a novel method for achieving a G-equivariant neural network by implicitly parameterizing the steerable filters by G-equivariant MLP instead of learning them with a steerable basis function. The proposed frame is flexible, and the implicit kernel can also consider the problem context by expanding input to the kernel with problem-related parameters. The work provides theoretical justification for the proposed method and shows the model's effectiveness via empirical study. Strengths: 1. The work proposes a novel method to achieve G-equivariant by implicit parameterization, which provides more flexibility to the G-equivariant model. And empirically shows the benefit of the flexible kernel that can consider the additional problem-dependent features. 2. The text is easy to follow and clearly written Weaknesses: 1. Evaluation: The empirical evaluations are performed on N-body simulations, ModelNet, and the QM9 dataset. Which is different from the previous works [1,2,3]. And this makes it difficult for readers to make a direct comparison between the approaches. And raises the question of whether the performance of the approaches is significantly dependent on the chosen problem. 2. The proposed method utilized G-equivariant MLP for implicit parameterization of the kernel. But the rationale for choosing an implicit steerable kernel over EMLP[4] is not addressed nor included in the evaluations. 1.Steerable CNNs 2.3D Steerable CNNs: Learning Rotationally Equivariant Features in Volumetric Data 3.Clebsch–Gordan Nets: a Fully Fourier Space Spherical Convolutional Neural Network 4.A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. The G-equivariant MLPs also require us to solve constraints [1]. And we need to do the same for the steerable methods. How are these two different? 2. The result of Table 1: What is the rationale for choosing a problem where we have non-G-equivariant models outperforming G-equivariant methods? 3. Line: 105, equation 2 seems a little out of context. Even though the relevant work is cited and a little background on how we reach that constraint will make the work self-contained. 1.A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Group Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are delighted that the reviewers acknowledge the novelty and flexibility of our proposed method as well as the clarity of our manuscript. Below, we address the reviewers' concerns point-by-point: ### Weaknesses - We would like to emphasize that each of the approaches mentioned by the reviewer deals with distinct data modalities ([1]: pixel data, [2]: voxel data, and [3]: point cloud data). As our paper primarily focuses on the point cloud data, it is most appropriate to compare it to approach [3], which also utilizes similar datasets: ShapeNet (we employ MN-40) and QM7 (we use QM9). Although our dataset choices differ from this specific case, they are widely adopted and considered standard in current literature (see, for example, [34, 5]). - While the paper did not explicitly address the topic, we will incorporate a discussion in the related work section. The primary rationale behind this decision is that EMLP supports equivariant MLPs but not convolutions, rendering it incapable of directly processing image/volumetric data in the way Steerable CNNs do. Henceforth, we focus on CNNs with consideration for potential extensions to various data modalities. ### Questions - We acknowledge the reviewer's concerns about different types of constraints arising when dealing with Steerable CNNs. To differentiate, we present a summary table (see the PDF file attached to the global rebuttal) outlining the essential components for constructing steerable kernels in our approach compared to the baseline method from [7]. In short, solving the constraint from [1] involves obtaining a $G$-steerable basis, whereas building $G$-MLP does not. - The rationale was to choose a dataset with a symmetry group $G \neq SO(3)$ (for which MN40 suffices since every object is vertically aligned), so we can explore how the choice of $G$ affects the performance of $G$-equivariant models rather than try to achieve the state-of-the-art performance. - We understand the concern regarding the context of equation 2 in line 105. We will revise the presentation in this section in the camera-ready version to provide a more coherent flow. --- Rebuttal Comment 1.1: Title: Response Comment: I thank the authors for their response.
Rebuttal 1: Rebuttal: We appreciate the reviewers’ thoughtful feedback on our submission and have carefully addressed each of the questions raised in separate responses. We are glad to hear that reviewers found the proposed approach of using implicit neural representation to parameterize steerable kernels novel and flexible, with potential benefits in integrating problem-dependent features. Additionally, we appreciate their recognition of the extensive validation and comparisons with existing baselines and the clarity and significance of our work in the context of Steerable CNNs. A recurrent concern was that the difference in hardness of implementation compared to the baseline [7] was unclear. To address the issue, we now provide a table (see the PDF attached) highlighting each method’s key ingredients and the comparative hardness of their implementation. Pdf: /pdf/338bdd2352fe37d1e84f9ea8c8181180addc71be.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
CLIP-OGD: An Experimental Design for Adaptive Neyman Allocation in Sequential Experiments
Accept (spotlight)
Summary: This paper studies adaptive Neymann allocation and proposed an algorithm that achieves expected Neymann regret $\tilde{O}(\sqrt{T})$. I am not an expert in sequential experiment design so my ability is limited to assess the impact/relevance of this paper. Yet, I do consider myself well-versed in the potential-outcomes framework. Strengths: - I found the writing and analyses appear to be rigorous and precise. - The proposed algorithm achieves the Neymann variance in the synthetic experiment. Weaknesses: See "Questions" for more details. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I understand the authors may wish to provide sufficient background, but it is a bit hard to tell what are existing (and well-known) results and what are new results (except starting from page 6 when the authors introduce the algorithm). 2. I felt Assumption 1 is a bit tricky to decipher: what is the motivation to have the same constant $c$? What are the scenarios where this assumption may break? 3. Have you considered combining CLIP-OGD with an outcome model to get a doubly-robust estimator? 4. Can you add more simulations? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer 5 (wYBx) We thank you for your thoughtful reading of our paper. We are happy to hear that you found the writing and analysis to be rigorous and precise. Moreover, your comments have helped us improve the paper in places where there were ambiguities or confusion. We respond to the questions raised in your review below. ## Question 1 - Existing and New Results > I understand the authors may wish to provide sufficient background, but it is a bit hard to tell what are existing (and well-known) results and what are new results (except starting from page 6 when the authors introduce the algorithm). Thank you for highlighting this lack of clarity in our writing. Let us summarize the separation between existing work and our main contributions here. - **Existing Work**: The main existing results which are similar in spirit to ours are the works of (Hahn et al 2011) and (Blackwell et al 2022) which propose and formally analyze experimental designs for Adaptive Neyman Allocation. Neither of these works are able to show that the optimal Neyman variance is achieved, even in large samples. In this paper, our main contributions are: - (Section 4.1): The introduction of performance measures which ensure that the optimal Neyman variance is achieved in large samples. - (Section 4.2): A new adaptive experimental design which ensures bounds on the Neyman regret, thereby ensuring that the optimal Neyman variance is achieved. - (Section 5): Methods for valid confidence intervals using this experimental design. There are two additional minor contributions are contained in Section 6: - Proposition 6.1 shows that the "explore-then-commit" style designs proposed by existing work (Hahn et al 2011) and (Blackwell et al 2022) **can not attain optimal Neyman variance** in a design-based framework without very strong homogeneity assumptions. - Proposition 6.2 shows that MAB algorithms which minimize outcome regret are incompatible with achieving optimal Neyman variance. In fact, the variance of MAB algorithms will have convergence rates **larger by orders of magnitude**. The background in Section 2 contains the formulation of design-based inference as well as basic facts about the adaptive Horvitz--Thompson estimator. These basic facts have been established in i.i.d. settings but perhaps not design-based settings, although this does not count as a contribution by any means and we do not mean for it to be interpreted as such. We will add a citation to the i.i.d. literature to make this clear. In order to address your question, we will revise the introduction, related work, and preliminaries sections to clarify these points. ## Question 2 -- Assumption 1 > I felt Assumption 1 is a bit tricky to decipher: what is the motivation to have the same constant $c$ ? What are the scenarios where this assumption may break? We appreciate your comment here, which highlights that Assumption 1 would benefit from a longer discussion which more carefully interprets its scope conditions. To address your comment, we will include in the paper a discussion similar to the one below (although more streamlined): As stated briefly at the end of Section 2, the purpose of Assumption 1 is to ensure that the optimal Neyman variance is bounded away from zero. More precisely, the Assumption guarantees that $V_N = \Omega (1 / T)$ or on a normalized scale, $T \cdot V_N = \Omega(1)$. This is important because without this assumption, it is impossible to construct a sequential experimental design that can achieve a comparable variance. To illustrate this, consider the following example where Assumption 1 does not hold: for all units $i \in [n]$, consider the outcomes $$ y_t(1) = 1 \quad \text{and} \quad y_t(0) = -1 \enspace. $$ Here, $\rho = -1$ so that the outcomes are perfectly negatively correlated and thus Assumption 1 does not hold. Note that the average treatment effect is $\tau = 2$. The optimal Neyman design is to set $p^* = 1/2$. In this case, the Horvitz--Thompson estimator is $\hat{\tau} = 2$ with probability 1 and thus the Neyman variance is zero, i.e. $V_N = 0$. Thus, any sequential design which can select treatment probability $P_t \neq 1/2$ at some time $t$ will have a positive variance, and thus unbounded Neyman ratio. The construction above requires a somewhat pathological set of outcomes -- even a slight perturbation of the outcomes is enough to prevent such pathologies. We remark that Assumption 1 (or similar assumptions) are typically required for efficiently estimating variance and constructing confidence intervals. For these reasons, experimenters are typically comfortable making assumptions like Assumption 1, which rule out pathological edge cases. The fact that the same constant $c$ is used in both parts of the assumption is arbitrary: it can be replaced by two separate constants $c_1$ and $c_2$ and our analysis goes through by taking $c = \min \{ c_1, c_2 \}$. ## Question 3 -- Doubly Robust Estimation > Have you considered combining CLIP-OGD with an outcome model to get a doubly-robust estimator? This is an excellent question! We have not considered such extensions, but they sound very interesting. We are not aware of doubly robust estimation in an adaptive design. A promising direction for future work! ## Question 4 -- More Simulations > Can you add more simulations? Thank you for encouraging us to increase the strength of our simulations. Please see "Global Response" for our discussion on the results of a MAB baseline and additional new simulation results. --- Rebuttal Comment 1.1: Title: Thank you for the reply Comment: I thank the authors for the detailed response and addressing my concerns. I agree that with a more streamlined and detailed discussions on Assumption 1, and with the new results this paper is stronger that it was. --- Reply to Comment 1.1.1: Title: response to reply Comment: Thank you for your response. We are happy to hear that you found that the revisions brought forth in the rebuttals increased the strength of the paper.
Summary: The authors study adaptive allocation of samples into control and treatment groups in an experiment. They (i) define new performance measures for such allocations, namely Neyman ratio and Neyman regret, (ii) introduce a new algorithm called Clip-OGD that achieves optimal Neyman regret, and (iii) provide asymptotically correct confidence intervals for experiments run with Clip-OGD. Strengths: Presentation-wise, this is an extremely well-written paper. In particular, (i) The metrics that are proposed, Neyman ratio and Neyman regret, are well motivated. It is convincing why minimizing the Neyman ratio is desirable and how minimizing Neyman regret is a surrogate for that objective. (ii) Clip-OGD is a simple and intuitive algorithm. Notably, it does not have hyper-parameters that require tuning (although they could potentially be tuned). This is a major strength for an online algorithm where cross-validation using an offline dataset would not be possible. Weaknesses: There are only minor weaknesses (please see my questions below). Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) In the paragraph starting from line 226, Neyman regret is discussed from the perspective of multi-armed bandits. I understand how explore-exploit algorithms like UCB are not suitable for minimizing the Neyman regret; the purpose of adaptive Neyman allocation seems to be purely exploratory. Then, how does it relate to existing pure-exploration bandits? 2) Assumption 2 is not intuitive and is not explained well enough. What does "ruling out settings where the outcomes were chosen with knowledge of Clip-OGD" mean exactly? I understand the formulation assumes $y_0(t)$ and $y_1(t)$ are deterministic, but just to understand Assumption 2, what would its equivalent have been under a super-population assumption (i.e. if $Y_0(t)$ and $Y_1(t)$ were to be random and i.i.d.)? 3) Confidence intervals provided in Section 5 seem to be only asymptotically correct. Does this mean that no finite-sample guarantees are possible regarding the error rate of these intervals? This would limit the use of Clip-OGD in settings where strict error control is required (such as clinical trials). 4) Why do the experiments not include a multi-armed bandit baseline although they are mentioned as an alternative design in Section 6? 5) What does "informal" mean in Proposition 6.1? If it does not have a formal proof, maybe it should be stated as a remark rather than a proposition. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, the authors adequately addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer 4 (etbf) We thank you for your careful reading of our paper and we are happy to hear that you find the paper well-written and its results well motivated. Your comments in this review have been very helpful, as they have identified a few weak points in the paper which we have addressed. ## Question 1: Relation to Pure Exploration Bandits > In the paragraph starting from line 226, Neyman regret is discussed from the perspective of multi-armed bandits. I understand how explore-exploit algorithms like UCB are not suitable for minimizing the Neyman regret; the purpose of adaptive Neyman allocation seems to be purely exploratory. Then, how does it relate to existing pure-exploration bandits? Great question! Although there are many objectives for pure exploration bandits (e.g. simple regret, best arm identification), they are all concerned with selecting or identifying the best arm. In contrast, the objective in causal inference is to estimate the difference between the means of the two arms, to high precision. We are not only interested in *which arm* is the better arm, but rather we are interested in estimating *how much better* is the better arm. This is the average treatment effect. In Proposition 6.2, we showed that a bandit algorithm which minimizes cumulative regret will necessarily incur a large variance of the effect estimator, by orders of magnitude. This shows that cumulative regret and Neyman regret are generally incompatible. We have not investigated whether a similar incompatibility result holds for pure exploration bandits objectives such as simple regret and best arm identification, but we conjecture that the answer is "yes". ## Question 2: Assumption 2 > Assumption 2 is not intuitive and is not explained well enough. What does "ruling out settings where the outcomes were chosen with knowledge of Clip-OGD" mean exactly? I understand the formulation assumes $y_0(t)$ and $y_1(t)$ are deterministic, but just to understand Assumption 2, what would its equivalent have been under a super-population assumption (i.e. if and were to be random and i.i.d.)? Thank you for your comment, which highlights the need for additional discussion following Assumption 2. Due to space considerations, we removed such a discussion. However, we are more than happy to add a brief discussion which addresses these points back in the main body. Please see a longer answer to your question in the "Global Response". ## Question 3: Confidence Intervals > Confidence intervals provided in Section 5 seem to be only asymptotically correct. Does this mean that no finite-sample guarantees are possible regarding the error rate of these intervals? This would limit the use of Clip-OGD in settings where strict error control is required (such as clinical trials). Yes, you are correct that we do not establish any finite sample guarantees for these confidence intervals. We agree that confidence intervals with finite sample guarantees would be more desirable in practice, provided that they would not increase dramatically in width. However, we remark that asymptotic coverage guarantees are the standard in causal inference (space is limited in this rebuttal, but we are happy to provide additional citations during discussion period). In fact, randomized clinical trials today typically use exactly these types of confidence intervals, which only guarantee coverage asymptotically. This is referred to in practice as a "large sample approximation". In this sense, our results are in-line with the literature. Moreover, over additional simulation results in the appendix show that the intervals cover at the nominal level. ## Question 4 -- MAB in Experiments > Why do the experiments not include a multi-armed bandit baseline although they are mentioned as an alternative design in Section 6? Thanks you for raising this point, which encouraged us to update our simulations and strengthened the paper. Please see "Global Response" for our discussion on the results of a MAB baseline and additional new simulation results. Initially, we did not include a MAB baseline because Proposition 6.2 shows that any adaptive design guarantees a cumulative regret of $\mathcal{O}(\sqrt{T})$ must incur a variance of $\Omega(1 / \sqrt{T})$, orders of magnitude larger than even naive Bernoulli. Simulations are consistent with this claim. ## Question 5: Proposition 6.1 > What does "informal" mean in Proposition 6.1? If it does not have a formal proof, maybe it should be stated as a remark rather than a proposition. We recognize that "informal" was a confusing term to use in Proposition 6.1. Initially, the term "informal" was used here because the class of potential outcomes referenced by Proposition 6.1 is directly constructed in the appendix. Moreover, Proposition 6.1 is restated in the appendix with respect to this class. That being said, there is nothing "informal" about Proposition 6.1 from the point of mathematical rigor. To address this comment, we have removed the word "informal" and replaced the sentence after Proposition 6.1 with the following: "The specific class of potential outcomes referenced in Propposition 6.1 is constructed in Appendix E.1".
Summary: The work considers adaptive experimental design for sequential experiments. To do so, a new regret-like measure, called Neyman regret is defined that compares the ratio of the variance under the chosen experiment design with respect to the variance under the optimal experiment design. Drawing connections to online convex optimization literature, an algorithm is developed that provides $\tilde O(\sqrt(T))$ Neyman regret. In contrast, it is also established that two-stage explore then commit or multi-arm bandit setups may not result in (super)linear Neyman regret. Additionally, asymptotically valid confidence intervals using the adaptively collected data are provided. Strengths: S1. An important topic of sequential experimental design, particularly setting up the framework from the point of regret minimization. S2. The core idea is well-presented S3. Comparison of Neyman regret with alternative designs is also valuable. Weaknesses: W1. Numerical simulations that analyzed more aspects of the algorithm could have made the paper stronger. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: A. For the gradient term in the algorithm, it might be beneficial to elaborate on that, i.e., from variance eqn to eqn for estimating that variance and taking the derivative to get the cubic terms. B. I see the need for the projection term in the algorithm, but it is not clear how to actually set it in practice. C. I am curious if the authors can elaborate on the choice of the utility for gradient estimation. Instead of minimizing the utility with just the new sample, why not do a Follow-the-leader style algorithm to minimize the utility over all the samples seen so far? Maybe even FTRL, with a regularizer that prevents the distribution from shifting too far from Bernoulli(0.5) could replace the projection step? D. I think the discussion around non-superefficient variance is important and I while I see the need of it, I do not quite understand it properly. Having more discussion about that could be beneficial. E. What is the practical relevance of the experiment setup for Fig 1(b)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: F. To understand the sensitivity of the projection hyper-parameter, can ablations for it be provided in Fig 1? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer 3 (2JiC) We thank you for your careful review of our paper and we are happy to hear that you believe the core idea to be well-presented and find the comparison to alternative designs valuable. We are grateful for the questions and concerns you raised in your review, which have strengthened the paper. We respond to your questions and concerns below. ## Weakness W1: Stronger Simulations > W1. Numerical simulations that analyzed more aspects of the algorithm could have made the paper stronger. Thank you for encouraging us to increase the strength of our simulations. We have added additional baselines and stability analyses. Please see "Global Response" for details. ## Question A: Gradient Term > A. For the gradient term in the algorithm, it might be beneficial to elaborate on that, i.e., from variance eqn to eqn for estimating that variance and taking the derivative to get the cubic terms. Thank you for highighting this weakness in our exposition. We will address this by adding a few sentences describing the derivation of the gradient estimator after Algorithm 1. ## Question B: Projection Term > B. I see the need for the projection term in the algorithm, but it is not clear how to actually set it in practice. In Clip-OGD, we set the projection parameter according to a decaying rate, $\delta_t = (1/2) \cdot t^{-1/\alpha}$, where $\alpha$ is a decay parameter chosen by the experimenter. Although $\alpha$ could be arbitrarily chosen, our regret analysis in the proof of Theorem 4.2 demonstrates that choosing $\alpha = \sqrt{5 \log(T)}$ minimizes our derived upper bound on the expected Neyman regret. In this sense, we recommend that experimenters choose $\alpha = \sqrt{5 \log(T)}$ in practice. Note that the experimental data is only collected once so that it is not possible to choose $\alpha$ using cross-validation or a similar statistical method. ## Question C: Gradient Estimation > C. I am curious if the authors can elaborate on the choice of the utility for gradient estimation. Instead of minimizing the utility with just the new sample, why not do a Follow-the-leader style algorithm to minimize the utility over all the samples seen so far? Maybe even FTRL, with a regularizer that prevents the distribution from shifting too far from Bernoulli(0.5) could replace the projection step? This is a great question! We tried an approach like this, but had difficulty analyzing its properties. On one hand, you need to allow the algorithm to be able to select $p_t$ arbitrarily close to $0$ or $1$ as $T$ grows, which means that penalizing the distance from Bernoulli(0.5) is challenging to implement correctly. On the other hand, an approach that does not explicitly constrain $P_t$ to be in the open interval $(0,1)$ might have the propery that the $P_t = 1$ or $P_t = 0$ at some round with non-zero probability, which would introduce bias in the Horvitz--Thompson estimator. An interesting open question is to achieve sublinear expected Neyman regret *without* using a projection step. We conjecture that this could shave off the sub-polynomial factor from our bounds on the expected Neyman regret and achieve a clean $\mathcal{O}(\sqrt{T})$. ## Question D: Assumption 2 > D. I think the discussion around non-superefficient variance is important and I while I see the need of it, I do not quite understand it properly. Having more discussion about that could be beneficial. Thank you for your comment, which highlights the need for additional discussion following Assumption 2. Due to space considerations, we removed such a discussion. However, we are more than happy to add a brief discussion which addresses these points back in the main body. Please see "Global Response" for an interpretation of Assumption 2. ## Question E: Fig 1(b) > E. What is the practical relevance of the experiment setup for Fig 1(b)? Thank you for raising this question, which highlights a weakness in our exposition. We will update Section 7 with a brief discussion on the practical relevance of the experimental setup for Fig 1(b). Here is a longer discussion below: Practically speaking, Fig 1(b) represents a possible scenario where the first 100 people in the micro-economic experiment actually invest *less money* in machinery or equipment if they receive the macro-insurance intervention, while the remaining 14,435 people will invest *more money* given the macro-insurance. This sort of scenario is not unreasonable -- perhaps the first 100 people recruited into the trial were systematically different from the rest, i.e. from a well-connected part of the entrepreneurial community. Explore-then-Commit (ETC) experimental designs -- which are the predominant existing method proposed by (Hahn et al 2011) and (Blackwell et al 2022) -- heavily rely on the first units arriving being representative of later units. The simulation results in Fig 1(b) demonstrates that these methods can actually increase the variance, while Clip-OGD does not suffer from these issues.
Summary: This paper studies the problem of “Adaptive Neyman Allocation”, which involves designing an efficient, adaptive experimental design. Neyman allocation is an infeasible experimental design which would be optimal (minimum variance) if the planner knew all the exact potential outcomes under different treatments. However, this is infeasible, and so the goal considered in the paper is to build an adaptive experimental design which is nearly as efficient (in terms of variance) as the infeasible non-adaptive Neyman allocation asymptotically. To measure the performance, the first contribution of the paper is to propose new measures of regret (and regret ratio) similar to the notion of regret in bandits/statistical learning. Second, the paper proposes an adaptive design based on the idea of adaptive gradience descent to adjust the treatment probabilities to minimize regret. The paper shows that the regret in this approach scales as O(sqrt(T)). Finally the paper constructs confidence intervals which guarantee asymptotic coverage of the average treatment effect. Strengths: – Novelty: The paper claims to be the first to introduce the notion of Neyman regret in the context of adaptive experimental designs. I am not fully aware of the related literature, but this seems to be an interesting contribution to analyze from this perspective. – Well-written: The paper is well organized overall and the concepts are introduced and explained crisply. – Theory: The paper presents results well-grounded in theory. Weaknesses: 1. Related Work: The related work section mainly talks about Neyman allocation and about casual inference under adaptively collected data. However, it seems there is also a large body of work that studies adaptive experimentation/ adaptive design for randomized trials. All of these references seem to be missing (please see few examples below and references therein)? It would be good to distinguish this paper from this body of related work. -- Eggenberger, Florian, and George Pólya. "Über die statistik verketteter vorgänge." ZAMM‐Journal of Applied Mathematics and Mechanics/Zeitschrift für Angewandte Mathematik und Mechanik 3, no. 4 (1923): 279-289 -- Xu, Yanxun, Lorenzo Trippa, Peter Müller, and Yuan Ji. "Subgroup-based adaptive (SUBA) designs for multi-arm biomarker trials." Statistics in Biosciences 8 (2016): 159-180. -- Eisele, Jeffrey R. "The doubly adaptive biased coin design for sequential clinical trials." Journal of Statistical Planning and Inference 38, no. 2 (1994): 249-261. -- Hu, Feifang, and William F. Rosenberger. "Optimality, variability, power: evaluating response-adaptive randomization procedures for treatment comparisons." Journal of the American Statistical Association 98, no. 463 (2003): 671-678. 2. Unsurprising: To me it is a little unsurprising/ unimpressive that the approach can achieve the optimal data efficiency as T tend to infinity (after a really large number of samples). The authors also seem to acknowledge this in part in the final section of the paper. 3. Empirical Evaluation: It may be useful to add more interesting baselines if available in comparison of the regret Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: – In section 4.1 why isn’t the variance of adaptive experimental design, V also a function of t? – There seem to be a few other approaches that study adaptive randomziation in clinical trials. For example: Zhang, Lanju, and William F. Rosenberger. "Response‐adaptive randomization for clinical trials with continuous outcomes." Biometrics 62.2 (2006): 562-569. How does the proposed approach compare against existing methods? – Ethical concern: Please see additional question under Limitations section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Could there be fairness/social welfare concerns stemming from such optimal designs? Perhaps the allocation algorithm may assign a higher probability to a less effective or detrimental treatment simply because the variance in its outcomes is higher. As a result, in the quest for minimized variance, could the negative treatment be administered more often than advisable/necessary? Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer 2 (NsFr) We thank you for your thoughtful review of our paper and we are glad to hear that you found our exposition crisp and the results well-grounded in theory. We are grateful for the comments, questions, and concerns raised in your review: they have highlighted weak points in our paper, which we have worked to address. As a result, we believe the paper has been greatly strengthened. We are limited for space in our rebuttal, but happy to expand upon any points in the discussion period. We hope you find that many of these have been addressed. ## W1: Related Work > The related work section mainly talks about Neyman allocation and about casual inference under adaptively collected data. However, it seems there is also a large body of work that studies adaptive experimentation/ adaptive design for randomized trials. All of these references seem to be missing (please see few examples below and references therein)? It would be good to distinguish this paper from this body of related work. We thank you for providing us with additional references so that we can better contextualize our results within a broader literature. We have read these papers (some of them familiar, some of them not) as well certain references therein and prepared a (longer) appropriate survey which better distinguishes our paper from this body of work. To address this point raised in your review, we will include these papers into our literature review and better differentiate our work from the papers you raised. Our problem setting is distinct from this literature in two ways. The first distinction is the goal of the adaptive design. In the references above, there are a variety of goals from reducing sample size, to minimizing "harm", to identifying relevant subgroups. In contrast, our work focuses on the goal of minimizing the variance of the sequential Horvitz--Thompson estimator. The second distinction is the inferential frameworks. While the references you listed make strong independence and parametric assumptions (which simplify their problems), we take a design-based perspective, requirigin neither i.i.d. nor parametric assumptions on the outcomes. ## W2: Unsurprising > To me it is a little unsurprising/ unimpressive that the approach can achieve the optimal data efficiency as T tend to infinity (after a really large number of samples). The authors also seem to acknowledge this in part in the final section of the paper. We respectfully disagree; the state of prior work suggests that the main results presented in the paper are surprising and relevant to the design-based inference community. The most closely related work of (Hahn et al 2011) and (Blackwell et al 2022) for the same problem of Adaptive Neyman Allocation was not able to establish that the optimal Neyman variance was recovered in large samples -- in fact, we show in Proposition 6.1 that the adaptive designs presented in those papers *provably cannot* obtain the Neyman variance in large samples. Experimental results (Fig 1b) corroborate this theoretical analysis. Our main results are the first to provably achieve this goal of Adaptive Neyman Allocation. One of the insights of our work (which may be surprising to the design-based causal inference community) is that sophisticated algorithmic techniques seem to be required to achieve optimal Neyman variance, as the problem is equivalent to an adversarial online convex optimization problem. ## W3: Empirical Evaluation > It may be useful to add more interesting baselines if available in comparison of the regret. We have included two additional baselines. Please see "Global Response" for details. ## Q1: Variance Derivation > In section 4.1 why isn’t the variance of adaptive experimental design, V also a function of t? We believe there is a slight misunderstanding here -- you are absolutely correct that the variance of the adaptive experimental design $V$ will be a function of $T$. We write $V = (1 + \kappa_T ) \cdot V_N$, where $\kappa_T$ is the Neyman ratio and $V_N$ is the optimal Neyman variance. Both of these quantities depend on $T$. We are happy to further clarify if this is unclear. ## Q2: Comparison to Existing Method > There seem to be a few other approaches that study adaptive randomziation in clinical trials...How does the proposed approach compare against existing methods? Thank you for this additional reference. The recommended adaptive design in this paper is the Doubly Biased Coin Design (DBCD), which we have included in simulations (see "Global Response"). ## Q3: Ethical Concerns > Could there be fairness/social welfare concerns stemming from such optimal designs? Perhaps the allocation algorithm may assign a higher probability to a less effective or detrimental treatment simply because the variance in its outcomes is higher. As a result, in the quest for minimized variance, could the negative treatment be administered more often than advisable/necessary? Great question! We have written an additional section for the appendix that addresses your point, which we are happy to share in the discussion phase. The highly influential 1979 "Bellman Report" writes that "even avoiding harm requires learning what is harmful...Learning what will in fact benefit may require exposing persons to risk". From this perspective, it may be ethically advisable to use a variance minimizing adaptive design because it allows the researcher to learn the effect while subjecting fewer human subjects to the experimental treatments. On the other hand, an adaptive treatment plan which minimizes cumulative regret will ensure that minimial harm is done to subjects in the experiment, but will offer less certainty about the extent of the benefit or harm of the treatments. Such an approach will lead to less informative generalizable knowledge of treatment effects, possibly defeating the goal of the research study and being, as a result, more unethical. --- Rebuttal Comment 1.1: Comment: Hello Reviewer NsFr, Were your concerns regarding the novelty (surprise) addressed by the authors? I have reviewed the related works you suggested in the context of this paper, and while they are relevant, I do not believe their omission warrants rejection alone. Given the authors' efforts to bulk up the related work, are your concerns addressed?
Rebuttal 1: Rebuttal: # Global Response We thank the five reviewers for their careful reading of our paper. We are happy to see that that the reviews were overall positive. Moreover, we are very grateful for the critiques and questions from reviewers that revealed certain weaknesses in our submission. We believe that the paper has been greatly strengthed as a result of this feedback. We respond to each reviewer individually, but we list the main themes of the revisions here: - **Simulations**: More simulations have been added, including a comparison to EXP3 and Doubly Biased Coin Design (DBCD) and an investigation into the stability of Clip-OGD relative to the step size. - **Interpretation of Results**: We add additional discussions interpreting the assumptions and elaborating on the derivation of the algorithm. - **Literature Review**: The literature review has been expanded and there are more careful comparisons to related prior work. - **Ethical Considerations**: We have drafted a new section in the appendix that discusses ethical considerations grounded in The Bellmont Report. We elaborate on some of these points below. ## Simulations We thank the reviewers for encouraging us to strengthen our simluation results. We have added two additional baselines for our simulations: EXP3 algorithm for outcome regret minimization and the Doubly Biased Coin Design (DBCD), as described in Section 6 of Eisele (1994). See Figure 2 in the attached pdf for their performance. We find that they suffer from variance which is orders of magnitude larger than what is achieved by Clip-OGD and ETC. This is perhaps unsurprising because these adaptive allocation mechanisms are designed for different purposes. The problem of (variance minimizing) Adaptive Neyman Allocation is relatively understudied so that we are unaware of additional meaningful baselines other than ETC. We are happy to include these additional comparisons in the revised paper. In addition, we have tested the stability of Clip-OGD to the step size, which can be found in Figure 1 of the attached pdf. We tried various step sizes of the form $\eta = c / \sqrt{T}$ for $c \in \{ 0.25, 0.5, 1.0, 2.0, 4.0 \}$. The original Clip-OGD was only run with $c = 1$. We find that smaller step sizes improve convergence rates, effectively removing the "overhead of adaptivity" in this example. However, because the randomized experiment can only be run once, experimenters will typically not be able to try many step sizes. While it remains an open question about how to select a step size which best mitigates the "overhead of adaptivity", our recommendation of $\eta = 1 / \sqrt{T}$ still maintains good convergence properties. ## Interpretation: Assumption 2 Several reviewers asked for additional interpretation behind Assumption 2, which we provide below. The Neyman regret is a quantity who expectation compares the variance of an adaptive design to the optimal Bernoulli design. We show that when using Clip-OGD, we can bound the expected Neyman regret by $\mathbb{E}[\mathcal{R}_T] \leq \tilde{\mathcal{O}}(\sqrt{T})$. However, the expected Neyman regret could be *negative* if the adaptive design achieves variance which is strictly smaller than the best Bernoulli design. This seems unlikely to happen for "typical" outcomes, but it is not impossible. In fact, an adversary could construct outcomes on which the optimal Bernoulli design achieves a variance like $V_N = \Omega(1/T)$, while *Clip-OGD specifically* will achieve a variance with a better rate, i.e. $V = \mathcal{O}(1/T^2)$. In this case, $\mathbb{E}[\mathcal{R}_T] \approx - T$. Note that this does not contradict the original regret bound that $\mathbb{E}[\mathcal{R}_T] \leq \tilde{\mathcal{O}}(\sqrt{T})$. Such an adversary would have to have detailed knowledge of the adaptive design and so this construction of outcomes seems like a pathological edge case. It is exactly this pathological edge case that Assumption 2 rules out. Assumption 2 is very likely to hold automatically in an i.i.d. setting precisely because the potential outcomes are drawn by a distribution independently of treatment assignment. In particular, the potential outcomes cannot be chosen by an adversary with knowledge of the experimental design. We suspect that Assumption 2 would not be necessary in an i.i.d. setting, but proving this seems beyond the scope of the current paper. Pdf: /pdf/fd37cb5ad8fe76b5415e9e4abdee671f9dc65584.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper proposes a new adaptative Neyman allocation for experimental design. The proposed adaptative design gets close to the optimal non-adaptative strategy without suffering from the same infeasibilities. Strengths: 1) The writing and structure of the paper are OK. 2) The topic is very relevant, and the paper proposes adaptative variance with formal guarantees ensuring feasibility is interesting. Weaknesses: 1) The paper is not easy to follow for the non-expert, both concerning the notations used and the derivations. 2) It seems that the motivation for this work could be broader. It is unclear to the reviewer why the authors focused on medical applications. 3) The method is only applied to a single microeconomic example. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1) I would like to see some discussion about the medical applications vs a more broader motivation. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer 1 (xwuB) Thank you for your careful reading of our submission. We are glad to hear that you find the topic relevant and the formal guarantees interesting. Below, we respond to the specific critiques raised in your review. ## Broader Movtivations > It seems that the motivation for this work could be broader. It is unclear to the reviewer why the authors focused on medical applications...I would like to see some discussion about the medical applications vs a more broader motivation. Although clinical trials are a relevant application, we agree that the statistical methods in this paper apply more broadly to a variety of domains. Some motivating examples of randomized experiments in other disciplines include: - **International Development Economics**: International Development Economics (IDE) is concerned with determining which interventions (typically by NGOs or state governments) increase economic outcomes in developing nations. In the past 15 years, the introduction of randomized experiments (as opposed to relying solely on economic theory) has been a major development is figuring out "what works" (see, e.g. Banerjee et al 2015, Duflo 2012). In fact, Esther Duflo, Abhijit Banerjee, and Michael Kremer won the 2019 Nobel Prize in Economics for their foundational role in this line of work. IDE experiments with short-term outcomes can be implemented in a sequential manner. - **Email Marketing**: Nearly all companies that operate online use email marketing campaigns (see, e.g. Ellis-Chadwick & Doherty 2012). A common type of experiment is to test which personalization strategy will improve a certain outcome, e.g. a customer clicking a link. More generally, experiments like this are frequently run at tech companies to guide product development. These experiments typically have very quick response times and so can be made to run sequentially. - **Quantitative Political Sciences**: There has been a recent revolution in political science to incorporate more principled causal inference methods. A seminal work in this area is Gerber & Green (2000), which invesitgates the effects of canvassing on voter turn-out. More recently, Offer-Westort et al 2021 run a sequential experiment to test various interventions for reducing partisan bias. Our work shows that in settings where sequentially running the experiment is possible, fewer people are needed in the experiment to detect an effect, thereby saving money and resources. In order to address your comment, we will update the introduction to better highlight these applications beyond medical trials, thereby broadening the motivation for this work. - Banerjee, Abhijit, Esther Duflo, Rachel Glennerster, and Cynthia Kinnan. 2015. "The Miracle of Microfinance? Evidence from a Randomized Evaluation." American Economic Journal: Applied Economics, 7(1): 22-53. - Esther Duflo. “Women’s Empowerment and Economic Development”, Journal of Economic Literature, Vol. 50, No. 4: 1051-79, December 2012. - Ellis-Chadwick, Fiona and Doherty, Neil F. (2012). Web advertising: the role of email marketing. Journal of Business Research, 65(6) pp. 843–848. - Gerber, Alan S., and Donald P. Green. “The Effects of Canvassing, Telephone Calls, and Direct Mail on Voter Turnout: A Field Experiment.” The American Political Science Review, vol. 94, no. 3, 2000, pp. 653–63. - Molly Offer-Westort, Alexander Coppock, and Donald P. Green. "Adaptive experimental design: Prospects and applications in political science." American Journal of Political Science, 65(4): 826–844, 2021. doi: 10.1111/ajps.12597. ## Readability and Derivations > The paper is not easy to follow for the non-expert, both concerning the notations used and the derivations. Other reviewers (2JiC, etbf, wYBx) asked for further interpretation of Assumptions 1 and 2, as well as additional discussion into the derivation of the main algorithm, Clip-OGD. Please see our responses to their reviews for more details on how we addressed these points. We hope that these additional discussions on assumptions and derivations improve readability for a general audience. We agree that the notation used in design-based causal inference can be sometimes cumbersome, especially relative to general statistics or machine learning. We strive to make our paper as readable to as wide of an audience as possible, but we find value in sticking to conventional notation.
null
null
null
null
null
null
Structured Federated Learning through Clustered Additive Modeling
Accept (poster)
Summary: In this paper, the authors study heterogeneous federated learning, an important problem in federated learning, where the goal is to leverage the collective intelligence of multiple clients with diverse data distributions, features, or models to train a global model that can generalize well across all clients' data. In particular, the authors base their method on clustered federated learning and propose Clustered additive modeling to deal with the clustering collapse and dynamically changing models. The proposed method involves training a shared global model on top of cluster-specific models, and prediction results are obtained by combining the outputs from both the global model and the associated cluster model. Empirical results on two well-known datasets show the effectiveness of the proposed method. Strengths: Strength 1. The paper writing is clear and easy to follow 2. The empirical performance seems promising Weaknesses: Weakness: 1. Comparison with other methods under non-iid data setting needs to be improved. 2. Ablation study on structural clustering needs to be improved. 3. Fed-CAM introduces several hyper-parameters, including the number of rounds for warm-up, and the number of cluster size K. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Questions for authors: 1. It is unclear which part of the structural clustering leads to the final good performance, as there are several hyper-parameters and assumptions in cluster generation. For example, 1) How does the number of rounds for warm-up affect the quality? 2) Can we dynamically adjust the cluster size K based on some coarse-to-fine heuristic? 3) Can the proposed method be generalized to handle graph data? 2. More existing methods could be compared under more non-iid settings as well as iid settings. For example, 1) more Distribution-based label imbalance setting, feature skew case, and label & feature skew case 2) comparison with knowledge transfer-based methods 3. The authors do not include sufficient discussions about the communication and computational cost, which are suggested to be included. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Clustering-based FL approaches have a common limitation that is how to determine the cluster size. Some analysis on this would be desirable in the revision. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback regarding our 'clear writing' and 'promising empirical performance.' Below, we provide our responses to the weaknesses and questions you mentioned in relation to this paper. ***Impact of warmup rounds*** Please refer to the general response. ***Extra cost of CAM*** Please refer to the general response. ***Chosen of Cluster Size $K$*** The hyperparameter of cluster size $K$ is not introduced by CAM. It is from the original clustered FL methods. On one side, experiments of $K=5,10$ have been conducted under the ground truth of $K=10$. On the other side, While CAM can be an add-on to improve the performance of existing clustered FL methods, chosen of cluster size $K$ is also an independent branch to improve the performance. By the coarse-to-fine heuristic, we can set criteria, such as the loss difference between the local model and cluster model, to determine whether some clients need to be separated from a cluster or more clusters are needed. ***More Non-IID Settings, Datasets and Baselines*** We have compared the baselines under four non-IID settings, including label imbalance and label skew, and will involve feature skew in future works. CAM can be generalized easily to existing FL methods handling graph data as an add-on. And we will involve more knowledge transfer-based methods in future works. We have tried to address most if not all concerns. Please let us know if you have any further questions. Thank you! Best, Authors --- Rebuttal Comment 1.1: Comment: Thanks for the clarificaiton. I am satisfied with the responses and revisions. --- Reply to Comment 1.1.1: Comment: Thank you very much for confirming our work and raising the score!
Summary: This paper studied the problem of clustered federated learning. It analyzed the limitations of existing clustered FL algorithms, including clustering collapse and missing shared knowledge among clusters. Then this paper introduced a novel Clustered Additive Modeling (CAM) framework for learning both a globally shared model and cluster-wise models. A Fed-CAM algorithm was proposed to train CAM model parameters iteratively. The experimental results showed that CAM outperformed existing clustered FL algorithms in prediction accuracy. ####################################### I appreciate the detailed response from the authors. However, my major concerns regarding the overall contributions still remain. Therefore, I keep my scores unchanged. Strengths: The strengths of this paper are summarized below. (1) It proposed a novel Clustered Additive Modeling (CAM) framework for clustered federated learning. Compared to previous algorithms, CAM leveraged a global model to discover the shared knowledge among clusters. (2) It demonstrated the flexibility of CAM framework by incorporating CAM with two popular clustered FL methods, thus leading to IFCA-CAM and FeSEM-CAM models. Then it provided a unified Fed-CAM optimization method for iteratively updating the clustering and global model parameters. (3) Experimental results on both cluster and client-wise non-IID settings showed that compared to vanilla IFCA and FeSEM, IFCA-CAM and FeSEM-CAM could achieve better prediction performance in local clients. Weaknesses: (1) The rationale behind the proposed CAM framework lacks sufficient justification. As illustrated in the introduction section, existing cluttered FL algorithms suffers from clustering collapse, fragility to outliers, and sensitivity to initialization. However, the techniques regarding how CAM addresses these limitations are not explained. (1-1) CAM introduced a global model to mitigate the lustering collapse. As shown in Subsection 3.3, it requires that global and cluster-wise models learn complementary knowledge. One concern is whether the global knowledge shared by all clients always exists in any non-IID FL settings. Furthermore, line 49 states that “different tasks or domains can benefit from sharing low-level or partial representations”. A natural question is whether it is more reasonable to learn partial representations (e.g., common low-level model parameters in FedRep [24]) for discovering the globally shared knowledge among clusters. In addition, the sensitivity of complementarity relationship between global and cluster-wise models can be analyzed with respect to warm-up strategies. (1-2) This paper stated that the outlier clients have less impact on the global model, thus enabling the less vulnerability of CAM. But the outliers might have less common knowledge with other clients/clusters. Besides, based on the update rules in Eqs. (9)(14), the outliers can also dominate the training of global model, if they have a significantly large number of training samples. (1-3) It is more convincing to quantitively evaluate the sensitivity of CAM to initialization, e.g., showing the clustering results with different random initializations in the experiments. (2) The optimization of IFCA-CAM and FeSEM-CAM is confusing. Algorithm 1 updates the model parameters \theta_i and \theta_i^0 (lines 5-6) separately. This might lead to sub-optimal solutions of local objective functions, compared to optimizing \theta_i and \theta_i^0 at each training epoch simultaneously. In addition, it is much more efficient to update \theta_i and \theta_i^0 together within E training epochs. (3) Though this paper focuses on clustered FL scenarios, it is better to compare the proposed CAM framework with other related FL frameworks. This is because learning the shared knowledge with a global model is a common strategy for personalized FL. For example, when each client is considered as a single cluster, the objective function of CAM is similar to [ref 1] and [ref 2]. In addition, the state-of-the-art pFL baselines based on feature sharing can be included to validate the performance of CAM. (4) The convergence analysis of CAM is provided based on the assumption that clustering remains stable during training. But the results in subsection 5.2 might not validate the clustering stability. This is because Figures 1&3 provides the clustering results using only cluster size. It is not guarantee that clients in each cluster remains the same during training. More quantitively evaluations could be provided to verify the quality and stability of clustering in CAM. [ref 1] Deng, Yuyang, Mohammad Mahdi Kamani, and Mehrdad Mahdavi. "Adaptive personalized federated learning." arXiv preprint arXiv:2003.13461 (2020). [ref 2] Mansour, Yishay, Mehryar Mohri, Jae Ro, and Ananda Theertha Suresh. "Three approaches for personalization with applications to federated learning." arXiv preprint arXiv:2002.10619 (2020). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: (1) It shows that CAM tends to learn balanced clusters and determine the number of clusters automatically. Thus, CAM might perform well when the ground-truth of client clustering is balanced. But when the ground-truth of client clustering is highly skewed, would the balanced clusters learned by CAM result in sub-optimal solutions? (2) Why is there a constant 1/m in Eq. (9) for updating global model parameters? (3) In Eq. (13), what is the notation “\eta”? If it denotes the learning rate, why is it only applied to the first term of Eq. (12) with classification loss? (4) In line 176, should it be “Eq. (13)” instead of “Eq. (3)” regarding the K-means regularization term? (5) Why do IFCA-CAM and FeSEM-CAM use different warmup strategies? Is the selection of warmup strategy related to the clustering methods (e.g., minimum loss values in IFCA, K-means over parameters in FeSEM) in FL? (6) What is FedAlt in line 217? (7) In FL system setting, would the hyper-parameter \lambda be selected based on the best performance over the validation set? If so, how are the train/validation/test sets splits in the experiments? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your suggestions and positive comments regarding our 'novel framework,' 'flexible framework,' and 'better empirical performance.' Below, we provide responses to the weaknesses and questions in your comments. Should you have additional questions, we are readily available for further discussion. ### Weakness Clarifications ***W1 (1-1) One concern is whether the global knowledge shared by all clients always exists in any non-IID FL settings?*** As a fundamental assumption of most FL methods, we assume that there always exists some shared global knowledge. This holds true in practice since FedAvg usually sets up a reasonably good baseline. Most non-IID FL works address non-IID challenges by finetuning a personalised model based on a global model. The raised concern is out of the scope of FL research. ***W1 (1-2) the outliers can also dominate the training of global model, if they have a significantly large number of training samples*** Our method CAM can handle outliers as long as they are the minority of the training set. If the outliers become the majority, they are not outliers anymore. ***W1 (1-3) It is more convincing to quantitively evaluate the sensitivity of CAM to initialization, e.g., showing the clustering results with different random initializations in the experiments.*** We already reported the results with different random initializations in Table 1-2: the "Test results (means$\pm$std)" are based on 5 random seeds. Moreover, as demonstrated by the low stand deviation, integrating CAM with existing clustered FL methods (e.g. IFCA and FeSEM) can enhance the stability. We will further clarify it in the next version. ***W2 Why warmup and alternative optimization?*** Both our convergence analysis and experiments demonstrate CAM's advantages over existing methods when using the warmup strategy and alternative optimization. In Fed-CAM algorithm, $\theta_i$ and $\theta_i^0$ are two different models that need to be optimized separately in parallel. They contribute to cluster-specific models and global models respectively. Updating $\theta_i$ and $\theta_i^0$ together without warmup degrades the performance as shown in Table 1 of the attached PDF in the general response. Moreover, it doubled the parameters to be trained at the same time, which is inefficient. ***W3 Similarity to APFL and MAPPER?*** The methods of APFL [ref1] and MAPPER [ref2] interpolate between global and local model parameters, while CAM adds the outputs of global and cluster models as the final predictions. These are very different model architectures leading to different learning algorithms. We compared to pFL methods such as finetuning in Section 2.2 of Supplementary Material and we will add more in the next version. ***W4 Clustering stability analysis*** The clustering converges fast as shown in Fig 1 of the attached PDF in the general response: Clients belonging to each cluster remain the same after a few global rounds. ### Question Clarifications ***Highly-skewed clustering analysis*** Fig 2-3 of the attached PDF in the general response show that CAM still performs well under highly-skewed clustering. ***Warmup strategy*** IFCA and FeSEM need different warmup strategies because they are initialized in different ways. The rule of thumb is to train the uninitialized models among the global model $\Theta_g$ and cluster models $\Theta_{1:K}$ by warmup stage. For example, IFCA initializes $K$ cluster models so IFCA-CAM's warmup needs to prepare a good global model. In contrast, for FeSEM initializing with $m$ client models whose aggregation already defines a global model, FeSEM-CAM's warmup should prepare $m$ client models for a better clustering. ***Dataset partition and chosen of $\lambda$*** We split each client's local dataset into disjoint training, validation and test sets with a ratio of $6:2:2$ and tune $\lambda$ based on the validation set performance. ***Typos*** We will correct typos raised in Questions 2, 3, 4, and 6. FedAlt was proposed in the same paper as FedSim. ***Limitations*** We have discussed the limitations of our methods in the later paragraph of Introduction. [ref 1] Deng, Yuyang, Mohammad Mahdi Kamani, and Mehrdad Mahdavi. "Adaptive personalized federated learning." arXiv preprint arXiv:2003.13461 (2020). [ref 2] Mansour, Yishay, Mehryar Mohri, Jae Ro, and Ananda Theertha Suresh. "Three approaches for personalization with applications to federated learning." arXiv preprint arXiv:2002.10619 (2020). Please let us know if you have any further questions. If our responses have sufficiently addressed your concerns, it would be great if you are willing to reconsider your score kindly. Thank you! Best, Authors --- Rebuttal Comment 1.1: Title: Thank you for your response! Comment: I appreciate the detailed response from the authors. However, my major concerns still remain. First of all, I respectfully disagree with "W1-1: The raised concern is out of the scope of FL research". When you apply a FL algorithm in practice, there's never guarantee that there's global knowledge shared by all clients, especially in the cross-device scenario, or when there are malicious clients. Second, for W1-2, there might be very few outlier clients, but each of them could have a large number of examples. Third, overall, this seems like "yet another" FL algorithm. The idea of combining the global model and the cluster-wise models seems interesting. However, the lack of theoretical insights significantly affects the contributions. Yes, the authors provided proof regarding the convergence. But another major theoretical aspect (arguably maybe more important) not discussed in the paper is when (under what conditions) the proposed algorithm would perform better than clustered FL, not just demonstrated by empirical results. Without this kind of theoretical results, I would consider the overall contributions underwhelming. --- Reply to Comment 1.1.1: Comment: Thanks for your reply! We would like to further address your concerns. You asked good questions, but they are not the problem studied in our paper. It is not realistic to completely solve all these problems in one paper. We focus on clustered federated learning and especially the clustering collapse problem, not how to deal with malicious/outlier clients or whether to apply federated learning (or global knowledge sharing). Although our method can mitigate these two problems as its byproducts, they are not what we mainly aim to solve in this paper. More detailed discussion: - It is not "Federated Learning" but "local SGD" if no global knowledge can be shared across clients. In practice, it is possible that some (malicious) clients do not share any knowledge with the others. Our model is able to capture this case by learning a local cluster model for such clients whose output dominates the global model output in the additive prediction, so they share nearly zero knowledge with others. Moreover, existing methods can lower these clients' importance or remove them entirely [ref 1-3] and they can be seamlessly integrated into our CAM framework. - Outlier clients with a large number of examples can be addressed by applying equal weights to all clients, which is commonly used in popular FL methods [ref 4-6] with thousands of citations. Even without using equal weights and when the global model is dominated by such an outlier, our method can learn cluster models for other non-outlier clients and let their outputs dominate the global model output in the additive prediction, hence robust to such outliers. - We have clarified that our method is fundamentally different from APFL or MAPPER or other FL methods: they compute a sum of global/local model parameters while our model computes a sum of global/local model outputs. Besides the convergence analysis, in our introduction and experiment sections, we provided a thorough analysis and empirical evidence explaining why adding CAM to existing clustered FL is critical to overcome their current shortcomings. [ref 1] Sun, Ziteng, et al. "Can you really backdoor federated learning?." arXiv preprint arXiv:1911.07963 (2019). [ref 2] Bagdasaryan, Eugene, et al. "How to backdoor federated learning." International conference on artificial intelligence and statistics. PMLR, 2020. [ref 3] Wang, Hongyi, et al. "Attack of the tails: Yes, you really can backdoor federated learning." Advances in Neural Information Processing Systems 33 (2020): 16070-16084. [ref 4] Li, Tian, et al. "Federated optimization in heterogeneous networks." Proceedings of Machine learning and systems 2 (2020): 429-450. [ref 5] Karimireddy, Sai Praneeth, et al. "Scaffold: Stochastic controlled averaging for federated learning." International conference on machine learning. PMLR, 2020. [ref 6] Ghosh, Avishek, et al. "An efficient framework for clustered federated learning." Advances in Neural Information Processing Systems 33 (2020): 19586-19597.
Summary: This paper proposes a new federated learning with a specified structure for the prediction produced for each client, called "clustered additive modelling", which adds the prediction of a global model to the prediction of the model trained on a local cluster of clients. It is a modification to existing clustered federated learning methods and fixes several critical issues of these methods such as clustering collapse, fragility to outlier clients, and sensitivity to initialization. Some of the issues are fundamental and widely exist in many existing methods. The paper shows that the proposed CAM can effectively address these issues in different clustered FL methods under different non-IID settings, consistently leading to a promising improvement. Strengths: 1. This tackles a fundamental problem in clustered FL, i.e., clustering collapse. This problem might be widely observed but has not been well investigated and effectively solved before. Hence, I believe this work can be helpful and have a broad impact on FL problems studying statistical heterogeneity and structures among clients. 2. The motivation of this paper is clear. The solution provided by clustered additive modelling is principal, easy to understand, effective across different settings, and generalizable to various existing FL methods. 3. The proposed algorithms in Section 3.1 and 3.2 are intuitive and convincing with detailed explanations per step and theoretical guarantee (convergence analysis). I appreciate the authors for providing two examples of CAM algorithms following the same high-level idea. 4. The experiments conducted on 2 datasets x 4 non-IID settings show great advantages of the proposed method over existing FL and clustered FL methods. The analysis of clustering collapse also provides a nice explanation and empirical evidence for the improvement. 5. The code is provided for high reproducibility. Weaknesses: 1. Compared to existing clustered FL methods, the proposed method needs to train an additional model. How much extra computation does it require? It would be helpful to provide the computation complexity of CAM. 2. While other parts of the paper are easy to understand, the convergence analysis part is limited in length and left many details in the appendix. Can you elaborate a little bit more on the proof idea? 3. The comparison and experiments on each dataset are thorough to me but the empirical results would be stronger if experimenting on more datasets. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Can you provide more details and some examples of the cluster-wise non-IID setting? 2. What is the cost of the warmup stage? How is the performance affected by reducing the warmup rounds? Overall, I think this is a solid paper that successfully addressed a widely existing fundamental problem in FL that has not been investigated before. The motivation and proposed idea are principal. Most parts of the paper are clear. The analysis and experiments are convincing. Experiments on more datasets and more guidance on the theoretical analysis can further strengthen the draft. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes. The authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback regarding our 'clear motivation tackling a fundamental problem in clustered FL,' 'intuitive and convincing algorithms,' 'nice analysis for clustering collapse,' and 'high reproducibility.' Below, we provide our responses to the weaknesses and questions you mentioned in relation to this paper. ### Weakness Clarifications ***Extra cost of CAM*** Please refer to the general response. ***Convergence framework*** We will add the main steps of convergence analysis into the main paper in future works. The proof framework of convergence analysis is to bound the error of one communication round first, then add $T$ rounds together. For the error of one communication round, we break it into three parts, local training, clustering and aggregation, and bound each separately. ***Experiments on more datasets*** We will test CAM on more cases, such as graph and NLP in future works. ### Question Clarifications ***Examples of cluster-wise non-IID settings*** In the real world, clustering is very popular and used everywhere. For example, the Mall Customer Segmentation Dataset includes customer information like age, income, and spending score. Clustering can help identify different customer segments based on their shopping behavior. The 20 Newsgroups Dataset contains newsgroup documents from 20 different newsgroups, covering a wide range of topics. Clustering can be used to group documents with similar content, such as sports, politics, science, etc. ***Impact of warmup rounds*** Please refer to the general response. We have tried to address most if not all concerns. Please let us know if you have any further questions. Thank you! Best, Authors
Summary: This paper proposes a novel clustered federated learning method using additive modeling to tackle the clustering collapse problem. The paper contributes to advance the research domain of clustered federated learning which is an important and practical solution to solve non-iid problem in federated settings. Strengths: 1. The proposed method is simple yet effective. The overall solution is technique sounds. This work is very likely will become a new baseline of clustered FL. 2. The targeting problem is an inherent challenge of clustered FL. The Introduction section provides insight analysis of current clustered FL. 3. The overall flow description is sufficiently clear. 4. The provided theoritical analhysis is well suited to the clustered FL settings. 5. The claims are well supported by the experiments. Weaknesses: 1. The readability of this paper could be improved. For example, there are many abbreviation names, e.g. CAM, FED-CAM, IFCA, FeSEM. The symbol system is a little bit complicated. 2. The paper discussed several insightful challenge of clustered FL. However, it is how the proposed method can solve these challenges. 3. Some description needs to be improved. For example, it is unclear how the Figure 1 and 2 link to the Introduction section. Moreover, in Introduction section, there are many symbols without a clear definition. 4. This method may require storing multiple models locally (cluster-level model, global model, and local model), which could increase costs. It would be better to compare and discuss with other methods to explore potentially more favorable alternatives. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In Eq.1, why you use two symbols h(.) and f(.) to represent global model and cluster-specific model. Are they using different model architecture? 2. In Eq. 2, why is the additive results is Y rather than 2Y? 3. Please discuss how the proposed method can solve the mentioned challenges in Introduction section. 4. Is it necessary to have a comparison experiments with some personalized FL method, e.g. FedAvg + Finetuning? 5. In Ensemble FL, whatis the different part of K times experiments? 6. How about to add a parameter to adjust the importance weight between two models h and f? Say h(.) + a*f(.). 7. In Notations (line 128), what does the symbol "0" represent in the context of the symbol $\theta_{1:m}^0$ ? 8. Is the performance of the global model or the cluster-level model compared during testing? 9. How many clusters obtained in training phase? Is it necessary to keep the same number of clusters as the CAM method for methods that need to specify the number of clusters? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. The authors have addressed the limitations. There is no negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback regarding our 'simple-yet-effective method,' 'well-suited theoretical analysis,' and 'well-supported claims.' Below, we provide our responses to the weaknesses and questions you mentioned in relation to this paper. ### Weakness Clarifications ***How does CAM solve the mentioned challenges?*** - CAM solves the mentioned challenges by introducing a global model $\Theta_g$ and adding its output to the output of the cluster model $\theta_{c(i)}$ each client $i$ belonging to. The final prediction is the sum of the global model output and cluster model output $y = h(x; \Theta_g) + f(x; \theta_{c(i)})$. This removes any globally shared components from cluster models and thus prevents one cluster model from capturing the globally shared knowledge, which leads to clustering collapse. - Moreover, this also removes the conflicts between client clustering and global knowledge sharing, where the former moves clusters models away from each other and thus harms global sharing, because CAM optimizes the two objectives separately on two groups of model parameters. Therefore, the global model solely focuses on capturing the globally sharable knowledge, while the cluster models focus on intra-cluster shared but inter-cluster separable knowledge. - Furthermore, CAM improves robustness against outliers, which primarily impact $\Theta_{1:K}$ but exert minimal influence on the global model $\Theta_g$. Notably, the interplay between $\Theta_{1:K}$ and $\Theta_g$ renders CAM less susceptible to initial cluster assignments' variations, as adjustments to $\Theta_g$ induce changes in the clustering scheme. ***Extra cost of CAM*** Please refer to the general response. ***Readability*** For readability-related Weaknesses 1 and 3, most of them can be found in the introduction section. The full name of CAM can be found in the abstract, while Fed-CAM denotes "Federated Learning with CAM". In the introduction, IFCA and FeSEM have been cited in Line 84. Fig 1 and 2 have been discussed in Lines 79 and 89, respectively. ### Question Clarifications ***In Eq.1, why you use two symbols $h(.)$ and $f(.)$ to represent global model and cluster-specific model. Are they using different model architecture?*** Yes. The global model can have a different architecture as the cluster-specific model as long as their outputs are addable. ***In Eq. 2, why is the additive results is $Y$ rather than $2Y$?*** The additive results will be processed by softmax before being compared with $Y$. Moreover, the models are learnable so as the scales of their outputs. So only $Y$ is needed. ***Is it necessary to have a comparison experiments with some personalized FL method, e.g. FedAvg + Finetuning?*** Yes, we have included personalized FL methods as baselines in Section 2.2 of the supplementary material, in which Fedavg+Finetuning and some other personalized FL methods have been compared. ***In Ensemble FL, whatis the different part of K times experiments?*** In Ensemble FL, the different part of $K$ times is the initialization. ***How about to add a parameter to adjust the importance weight between two models h and f? Say $h(.) + a*f(.)$.*** It is not necessary because the importance weights can be automatically learned by the models, i.e., the model parameters can control the scales of the model output. For inference, an extra importance weight to makes it inconsistent with the training. ***In Notations (line 128), what does the symbol "0" represent in the context of the symbol $\theta^0_{1:m}$?*** Both $\theta^0_{1:m}$ and $\theta_{1:m}$ are local models but they are different: $\theta^0_{1:m}$ are trained and aggregated for the global model optimization, while $\theta_{1:m}$ are trained and aggregated for the cluster model optimization. ***Is the performance of the global model or the cluster-level model compared during testing?*** Yes. Global models such as FedAvg and FedProx and cluster-level models such as IFCA and FeSEM have been compared during testing in Tables 1-2 of the main paper. ***How many clusters obtained in training phase? Is it necessary to keep the same number of clusters as the CAM method for methods that need to specify the number of clusters?*** As shown in the first column of Tables 1-2, the chosen number of clusters is $\{1,5,10\}$, while the ground truth is $10$. And for the second question, YES, the same number of clusters is kept for comparison. Please let us know if you have any further questions. If our responses have sufficiently addressed your concerns, it would be great if you are willing to reconsider your score kindly. Thank you! Best, Authors --- Rebuttal Comment 1.1: Comment: Thanks for the clarification. Despite some heuristic aspects that may affect the robustness, the proposed method might be a viable solution to handle FL by introducing structural knowledge. --- Reply to Comment 1.1.1: Comment: Thanks for confirming our clarifications. Please let us know if you have further concerns. If most of your concerns have been addressed, it would be great if you are willing to reconsider your score kindly. Thanks!
Rebuttal 1: Rebuttal: ## General response Our proposed method mentioned warmup stage can improve the performance of clustered FL methods, and the framework is flexible to be integrated with existing FL methods. Reviewers show significant interest in these two points by asking for more details and support to these two points. We respond to them by adding two more analyses as below. ***Impact of Warmup Rounds*** As shown in Table 1 below, we gradually increase the rounds of the warmup stage (from 0 to 50) while keeping the total budget of rounds to 100 (warmup + training), considering the limited capacity of computation and communication for local devices in FL. The best performance is achieved when the warmup rounds are set to 20. However, the performance shows minimal variation when the number is set to 10, 20, 30, or 40. It demonstrates that the performance is stable when we choose warmup rounds in the area from 10 to 40. The choice of warmup round numbers exhibits low sensitivity, like on the parameter plateau. Notably, with no warmup rounds, performance is substantially decreased due to the impact of worse-performed initial candidates of the FL system. Similarly, when the warmup rounds are increased to 50, indicating insufficient training, the performance will drop accordingly. We need to ensure there are enough training rounds with a proper number of warmup rounds. In summary, a few warmup rounds can improve the stability of FL optimization and accuracy-related performance. Given a proper area, choosing warmup rounds is low sensitivity to performance. ***Extra Cost of integrating proposed CAM framework to existing FL methods*** For simplicity, we use 'FedAvg' as the measuring unit or benchmark for the cost of storage, communication and computation on local devices. In general, CAM will bring one extra 'FedAvg' cost to the existing FL methods every communication round. As for IFCA [ref 1], which needs to transmit $K$ cluster-specific models to each client to compute the clustering, applying our proposed CAM framework with IFCA, we need to transmit $K$ cluster models and one extra global model to the clients, that is $K+1$ models in total. The communication cost and storage cost are listed in Table 1. Moreover, the warmup stage only incurs one 'FedAvg' cost. Therefore, integrating CAM can even reduce the overall cost by increasing the number of warmup rounds. Lastly, considering the tradeoff between performance and cost, we choose 30 warmup rounds out of 100 as the default experiment setting. **Table 1**: Ablation study of warmup round numbers for performance and cost using "FedAvg" as the measuring unit (Other settings: CIFAR-10 dataset, IFCA, client-wise non-IID with Dirichlet distribution $\alpha = 0.1$, Cluster number $K=10$). For more details, please refer to Table 1 of the attached PDF. | Baseline | # Warmup + Training | Accuracy/% | Macro-F1/% | Storage cost/'FedAvg' | Communication cost/'FedAvg' | Computation cost/'FedAvg' | |:--------:|:-------------------:|:---------:|:---------:|:------------:|:------------------:|:----------------:| | IFCA | 0+100 | 47.62 | 23.36 | **10x** | 10x | 10x | | IFCA-CAM | 0+100 | 63.75 | 32.17 | 11x | 11x | 11x | | IFCA-CAM | 10+90 | 72.69 | 41.24 | 11x | 10x | 10x | | IFCA-CAM | 20+80 | **73.83** | **44.72** | 11x | 9x | 9x | | IFCA-CAM | **30+70** | 72.54 | 42.86 | 11x | 8x | 8x | | IFCA-CAM | 40+60 | 72.98 | 42.20 | 11x | 7x | 7x | | IFCA-CAM | 50+50 | 65.74 | 26.63 | 11x | **6x** | **6x** | [ref 1] Avishek Ghosh, Jichan Chung, Dong Yin, and Kannan Ramchandran. An efficient framework for clustered federated learning. Advances in Neural Information Processing Systems, 33:19586–19597, 2020. Pdf: /pdf/154507f4c8d288b391042abf25506578d4398501.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Scaling laws for language encoding models in fMRI
Accept (poster)
Summary: The authors evaluate the effect of LM scale (in terms of N parameters) and fMRI dataset size on the performance of downstream encoding models, trained to predict the activity of individual voxels across the brain as a function of the input those participants received during a scanning session. They explore both the OPT and LLaMa models as pure text based LMs and the HuBERT, WavLM, and Whisper models as acoustic models, across a number of sizes. The authors find that fMRI encoding performance can be improved linearly via logarithmic increases in model scale, up to ~30B for text-based models, and the asymptote has not yet been reached for acoustic models. Increases in the amount of fMRI data included per subject also yield dramatic improvements. They also attempt to characterize what additional encoding performance is yielded by the use of acoustic models over text-based ones and find improvements only in auditory areas but not in higher-level associative areas. Such results propose for increased focus on “deep” neuroscience data collection and scaling up the computational resources utilized for encoding analyses. Strengths: The work explores a novel and relevant axis for brain encoding analyses and is appropriately contextualized via discussion of related work. The submission is technically sound and most claims are well supported. The submission is clearly written and well organized. The results are relevant and important and will hopefully encourage a “scaling up” of neural encoding analyses. Weaknesses: The sole evaluation of LM scale as a function of N parameters is reductive and should be addressed. There is a lack or under-specification of uncertainty quantification. See below for more detailed comments. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: General: Aside from LM model size (N parameters) a crucial component in predicting LM performance is the size of the pretraining dataset (e.g., OPT’s 180B tokens vs. LLaMa’s 1.4T tokens). Given that the authors observe differences between the architecturally similar LLaMa and OPT model families, but these models differ substantially in pretraining dataset size, this is worth mentioning. Relatedly, for the combined analysis of the LLaMa and OPT models, a more unified axis along which to evaluate the encoding performance of these models might be their mean perplexity across the evaluated podcast stories dataset. I strongly encourage the authors to consider this visualization as a parallel to 1a. It is both likely to be informative, and also proposes a more functional view of the problem space. One would be unlikely to encourage the use of a larger but undertrained 30B model over a smaller 6B model trained on more data with lower perplexity for an fMRI encoding task, particularly due to the author-mentioned advantages of more compressed representations and suggestion that “a careful balance must be struck between model size and model efficacy in order to maximize encoding performance”. Error bars: What do the error bars in figure 1b and 1e represent? (stdev, se, x% ci, etc.?) No uncertainty quantification is included for any other figures. Given that “Encoding model performance for a given layer was computed as the average voxelwise performance of that layer’s hidden states across of all of cortex for all of our 3 subjects”, there would be a number of ways to include uncertainty quantification at this top level even without rerunning any models. Minor: increase resolution of figures 1a,1b,1d,1e,3b,3c. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitations of the work are appropriately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and comments. We have included responses to your questions below. >Weaknesses: >The sole evaluation of LM scale as a function of N parameters is reductive and should be addressed. There is a lack or under-specification of uncertainty quantification. See below for more detailed comments. >Questions: >General: >Aside from LM model size (N parameters) a crucial component in predicting LM performance is the size of the pretraining dataset (e.g., OPT’s 180B tokens vs. LLaMa’s 1.4T tokens). Given that the authors observe differences between the architecturally similar LLaMa and OPT model families, but these models differ substantially in pretraining dataset size, this is worth mentioning. You are correct, we should have included this detail. We will mention this in an appropriate place in the camera-ready version of this paper. >Relatedly, for the combined analysis of the LLaMa and OPT models, a more unified axis along which to evaluate the encoding performance of these models might be their mean perplexity across the evaluated podcast stories dataset. I strongly encourage the authors to consider this visualization as a parallel to 1a. It is both likely to be informative, and also proposes a more functional view of the problem space. One would be unlikely to encourage the use of a larger but undertrained 30B model over a smaller 6B model trained on more data with lower perplexity for an fMRI encoding task, particularly due to the author-mentioned advantages of more compressed representations and suggestion that “a careful balance must be struck between model size and model efficacy in order to maximize encoding performance”. To address your concern, we have computed the mean perplexity across one of our test stories for all but our largest model (which we cannot rerun due to limitations in our computational budget). We would like to note that the perplexities are not directly comparable across the different model families as they employ different tokenization schemes. The table is included below and will be included in some form in the final version. | Model | Perplexity | | | | |----------|------------|---|---|---| | OPT-125M | 35.9191 | | | | | OPT-1.3B | 22.4842 | | | | | OPT-13B | 18.1750 | | | | | OPT-30B | 17.3551 | | | | | OPT-66B | 16.5627 | | | | | LLaMA-30B | 10.2188 | | | | | LLaMA-65B | 9.7363 | | | | >Error bars: >What do the error bars in figure 1b and 1e represent? (stdev, se, x% ci, etc.?) We apologize for the oversight. These error bars represent standard error. This will be clarified in the camera-ready version. >No uncertainty quantification is included for any other figures. Given that “Encoding model performance for a given layer was computed as the average voxelwise performance of that layer’s hidden states across of all of cortex for all of our 3 subjects”, there would be a number of ways to include uncertainty quantification at this top level even without rerunning any models. We chose to omit error bars in Figure 1a and 1d as the only sources of uncertainty quantification would be along the voxel axis or the subject axis. Quantifying uncertainty along the voxel axis would be more misleading than helpful as some voxels are inherently less responsive to language. Quantifying uncertainty along the subject axis is also not necessary here as we plot the individual performance for each subject as well as the mean. Figure 2 similarly shows uncertainty estimation by plotting all individual time courses of the 10 test story repeats. We will add to the final camera-ready version SNR-normalized subject-axis uncertainty estimation to Figures 1c and 1f, which should give a sense of how the shape of the layerwise encoding performance curve varies from subject to subject. This uncertainty estimation can be seen in Figure 2 of the rebuttal PDF. We are happy to include additional uncertainty estimation in any other figure however we are unsure of where else that information would be beneficial or does not already exist. >Minor: >increase resolution of figures 1a,1b,1d,1e,3b,3c. Apologies. We intend to revise this for the final camera-ready version. --- Rebuttal Comment 1.1: Title: Comments addressed Comment: Thanks for addressing this review and for supplying new content. > You are correct, we should have included this detail. We will mention this in an appropriate place in the camera-ready version of this paper. > We apologize for the oversight. These error bars represent standard error. This will be clarified in the camera-ready version. > Apologies. We intend to revise this for the final camera-ready version. Great. > To address your concern, we have computed the mean perplexity across one of our test stories for all but our largest model (which we cannot rerun due to limitations in our computational budget). We would like to note that the perplexities are not directly comparable across the different model families as they employ different tokenization schemes. The table is included below and will be included in some form in the final version. Ah, yes, you are right about tokenization, but thanks for producing this table. Even with that caveat, this is still an informative supplement. > We chose to omit error bars in Figure 1a and 1d as the only sources of uncertainty quantification would be along the voxel axis or the subject axis. Quantifying uncertainty along the voxel axis would be more misleading than helpful as some voxels are inherently less responsive to language. Quantifying uncertainty along the subject axis is also not necessary here as we plot the individual performance for each subject as well as the mean. Figure 2 similarly shows uncertainty estimation by plotting all individual time courses of the 10 test story repeats. We will add to the final camera-ready version SNR-normalized subject-axis uncertainty estimation to Figures 1c and 1f, which should give a sense of how the shape of the layerwise encoding performance curve varies from subject to subject. This uncertainty estimation can be seen in Figure 2 of the rebuttal PDF. Okay, all these decisions are reasonable and the proposed actions look good. Thanks!
Summary: Researchers compared the effectiveness of larger open-source language models, such as those from the OPT and LLaMA families, in predicting brain responses recorded using fMRI. They found that brain prediction performance improves logarithmically with model size, with a 15% increase in encoding performance as model size increases. Similar improvements were observed when scaling the size of the fMRI training set. The study also explored the scaling of acoustic encoding models and found comparable improvements. The analysis suggests that increasing the scale of both models and data will lead to highly effective models of language processing in the brain, enabling better scientific understanding and applications such as decoding. Strengths: - Presents a novel empirical observation of scaling laws for language and audio encoding models in fMRI. - They show log linear scaling in brain prediction, encoding performances in both language and acoustic domains. - They also show when scaling the size of the fMRI data, the performances increase log-linearly. Weaknesses: Most of my concerns are addressed in the questions section. One concern I have is that the code is not publicly available. If the authors published the code it would be greatly appreciated. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Wernicke's area is often paired and observed alongside Broca's area. Did the authors consider observing the Wernicke's area for both auditory and language encoding models? - In section 3.4 figure 3a, the "room for improvement" seems to be slightly more prominent in the right hemisphere. Please note that this observation is purely from the figure. Have the authors explored why this may be the case? - To observe the scaling laws in increasing data size, the authors withheld different amounts of the training data. Did the authors consider augmenting the data in other ways? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors adequately addressed the limitations of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and thoughtful comments. We have included responses to your questions below. >Weaknesses: >Most of my concerns are addressed in the questions section. One concern I have is that the code is not publicly available. If the authors published the code it would be greatly appreciated. While not currently available, we intend to include a link to code as well as pretrained encoding models that can be used to help replicate our results in the camera-ready version. >Questions: >Wernicke's area is often paired and observed alongside Broca's area. Did the authors consider observing the Wernicke's area for both auditory and language encoding models? The area we have defined as “left auditory cortex” is also referred to as Wernicke’s area. We apologize for the confusion and will mention this in an appropriate place in the camera-ready version. >In section 3.4 figure 3a, the "room for improvement" seems to be slightly more prominent in the right hemisphere. Please note that this observation is purely from the figure. Have the authors explored why this may be the case? This seems to be a subject-specific effect, as the other two subjects (presented in Appendix E) do not have the same lateralization. Given the small number of subjects, it is difficult to productively speculate on the reason for this. >To observe the scaling laws in increasing data size, the authors withheld different amounts of the training data. Did the authors consider augmenting the data in other ways? There may be other ways of withholding data, such as based on its semantic content or original source, to analyze the effects on model performance, if that is what you mean by augmentation. However, we think such analyses would likely be beyond the intended scope of this work. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification! I will keep my score. I wish the authors the best of luck!
Summary: The authors study how the scaling of large language models produce accurate features to predict human brain fMRI activity. They report scaling-like laws for fMRI encoding models – models with more parameters tend to more accurately predict fMRI activity (as measured with the Pearson correlation of the encoding model predictions and fMRI time series). Strengths: I found the paper to be of general interest, and the methods to be sound. The paper suggests that there is a utility to using modern large ML models as producing features for fMRI encoding models. Weaknesses: It is hard to determine whether this result should be expected, and what to make of its scientific significance (beyond an engineering feat). On one hand, one would expect that models that are larger (and therefore better at modeling language/sound data) should produce more useful features for language/sound data. On the other hand, these large models process language/sound data in ways that are almost certainly distinct from how the brain processes information (e.g., the brain doesn’t appear to merely do next-word prediction). So what does one make of this discrepancy? Do the authors believe that the brain mimics the computational processing of these models? And what is the scientific significance of using a model – that presumably looks nothing like the brain – to build better models of the brain? How valuable is using percent correlation change as a metric for how well the encoding models perform relative to, for example, R^2? Is it truly useful to benchmark or establish ‘scaling laws’ based on the average encoding performance across all fMRI voxels, rather than identify one or two key regions (or the variability across regions)? What might be interesting to evaluate is whether these scaling laws only apply to a subset of voxels/brain regions, since, presumably, how well a voxel/region follows a scaling law is likely dependent on how well a brain region’s function matches with the model type. Technical Quality: 3 good Clarity: 3 good Questions for Authors: For an ‘fMRI encoding model practitioner’, it would be helpful if there were concrete ‘recommendations’ for what models are most appropriate for language/audio modeling. For example – and this is a minor point – does it matter which layers one uses to produce the regression model features? If possible, it would be interesting to see a surface map that shows which voxels/regions follow a scaling law for the audio v language models, respectively. How dependent is the scaling law on computing the average (across voxels) for each of the maps per model? Can the authors also report R^2 for their encoding models benchmark? What does it mean (scientifically) that the model that best produces regression features for brain encoding models look nothing like the brain? Are there conclusions one should draw from this? Or is this just a coincidence? Might there be any correspondence between layers in an LLM and different brain regions? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations are mostly in terms of the scientific utility of claiming that there are scaling laws for fMRI. I doubt the authors intend this interpretation, but the title seems to hint at the possibility that if one achieves enough scale with language models, one might be able to perfectly predict brain activity (and therefore model the brain). This seems implausible (and likely not intended), but some discussion around this would be helpful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and thoughtful comments. We have included responses to your questions below. >Weaknesses: >It is hard to determine whether this result should be expected, and what to make of its scientific significance ... So what does one make of this discrepancy? Do the authors believe that the brain mimics the computational processing of these models? And what is the scientific significance of using a model – that presumably looks nothing like the brain – to build better models of the brain? We thank the reviewer for raising this point, as it is an important one. The suggestion that language models share fundamental commonalities with human brains is an ongoing discussion - see Schrimpf et. al., Goldstein et. al., and Caucheteux et. al., c.f. Antonello and Huth for more details. However, regardless of whether these models are mechanistically similar to the brain, they remain the most powerful fine-grained predictive models of biological language processing that have ever existed, and that alone gives them high scientific and practical value. While prediction should not be conflated with explanation, powerful predictive models of brain activity provide a uniquely useful pathway to eventual explanation. Sufficiently powerful encoding models can be used to simulate brain activity *in silico*, allowing for the rapid simultaneous testing of thousands of hypotheses that would be impossible to test *in vivo*. Furthermore, encoding models play an important role in Bayesian decoding techniques which have tremendous practical value – see Tang et al. 2023 for further discussion. > How valuable is using percent correlation change as a metric for how well the encoding models perform relative to, for example, R^2? Is it truly useful to benchmark or establish ‘scaling laws’ based on the average encoding performance across all fMRI voxels, rather than identify one or two key regions (or the variability across regions)? What might be interesting to evaluate is whether these scaling laws only apply to a subset of voxels/brain regions, since, presumably, how well a voxel/region follows a scaling law is likely dependent on how well a brain region’s function matches with the model type. Appendix B in the supplement gives a voxelwise breakdown of parameter scaling as a percentage improvement from our OPT-125M model to our OPT-30B model. As an additional analysis, we have now also generated flatmaps of the slopes of the individual voxelwise scaling laws for both data scaling and parameter scaling (see Figures 3 and 4 in the rebuttal PDF). We see that for data scaling, almost all voxels associated with language processing in cortex benefit from additional data. Scaling laws for parameters are a bit more complicated, but it seems that in general, areas that are typically associated with high-level language processing and cognition are those that benefit the most from increasing parameter size. >For an ‘fMRI encoding model practitioner’, it would be helpful if there were concrete ‘recommendations’ for what models are most appropriate for language/audio modeling. For example – and this is a minor point – does it matter which layers one uses to produce the regression model features? We believe that this is addressed in the current draft – see Figures 1c and 1f which show which layers perform best in each model. We would recommend layer 18 from the LLaMA-30B model and layer 20 from WavLM-317M, as these layers perform well and can be computed with relative ease. In cases with larger amounts of data than we possess, we would suggest using representations from larger models. We will add an explicit mention of these suggestions in the final version. Pretrained weights of these models will be released as well in the camera-ready version. >If possible, it would be interesting to see a surface map that shows which voxels/regions follow a scaling law for the audio v language models, respectively. Yes, we will include this in the supplement, and it has been attached to this rebuttal. >How dependent is the scaling law on computing the average (across voxels) for each of the maps per model? Some voxels scale more than others, but scaling is a brain-wide phenomenon that impacts nearly all language-selective brain regions positively. This is observable from the flatmaps mentioned above. >Can the authors also report $R^2$ for their encoding models benchmark? Unfortunately, these models were trained to optimize correlation and not $R^2$, so computing $R^2$ directly would not be meaningful. Retraining the models with the $R^2$ objective would fall outside our compute budget, but we will include a plot similar to Figure 1c showing the average $r^2$ across voxels, which bounds $R^2$ from above. This can be found in the rebuttal PDF (Figure 2). We will also be correcting a notational oversight and converting occasional mentions of $R$ to $r$. >What does it mean (scientifically) that the model that best produces regression features for brain encoding models look nothing like the brain? Are there conclusions one should draw from this? Or is this just a coincidence? We would not describe it as a coincidence. It makes sense that two systems that can both engage with language in a high-level way will have shared information about that language. This does not entail that they are performing the same computations or are “learning” in the same way. Your question is part of a larger scientific debate about encoding models that extends beyond the scope of this particular paper. >Might there be any correspondence between layers in an LLM and different brain regions? For language models we don’t find this to be the case. However, for audio models there does seem to be some correspondence, where earlier layers predict primary auditory cortex better and later layers predict higher-order auditory cortex better. This is shown in Figure 4c, with the center-of-mass attribution analysis in the stacked model. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I think the results of this paper are of useful scientific fact and value to the greater community. I raise my rating from 6 to 7.
Summary: Previous works have demonstrated that activations from Language Models fit, to some extent, brain activations in participants listening to audio texts. Here, the authors examine how the fit is influenced by model size (number of parameters) and the training dataset size. The main observation is that performance scales log-linearly with model size (until 30B parameters) and with fMRI training set size (from 1 to 100 stories). Strengths: The main originality of the work is to explore language models that are much bigger than the ones that had been used in previous works of this type (as well as recent, large, acoustic models). Not only does this represent a technical feat, but it also advances knowledge on the topic, as it was not clear if the representations constructed by larger models would necessarily converge towards more and more brain like representations when the models become bigger. Data presented here show that there was still a lot of room for improvment from GPT-2 like models. The raw performance of the best model (OPT-30B), presented in Fig.2 is quite impressive and, along with the analysis of the effect offmri training set size, buttresses the authors' claim of the usefulness of using large within subject dataset. Fig.3b which compares performance to estimated ceilings is very interesting theoretically as it suggest that the models model some regions better than others. Weaknesses: As the authors mention (at line 300), when models grow in size, the regression problem may become ill conditioned. Would there be any strategy to remedy this problem? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Figure 1 shows the increase in performance averaged _across the whole cortex_. Have you checked whether this profile is similar in different regions (e.g. auditory cortex, core language region, and higher-level regions like precuneus and medial prefrontal)? Were the (OPT) models of different sizes trained with the same language corpora (fixed size)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The limitations were adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and thoughtful comments. We have included responses to your questions below. > Weaknesses: > >As the authors mention (at line 300), when models grow in size, the regression problem may become ill conditioned. Would there be any strategy to remedy this problem? Naturally, poor conditioning is difficult to remedy algorithmically without relying on additional data as nothing will get around the fact that there is insufficient data to mathematically constrain the regression problem. Still, there may be alternative methods for enabling the continued scalability of encoding models without relying on unreasonably sized single-subject datasets. For example, instead of using ridge regression for the linear readout, one might consider using a bottlenecked neural network trained simultaneously on data from multiple subjects to assist in dimensionality reduction. This may help with regularization as it enables the model to learn a general, low-dimensional joint semantic representation that works well across many subjects. We believe that the possibility of jointly training language encoding models across subjects is worthy of additional study. >Questions: > >Figure 1 shows the increase in performance averaged across the whole cortex. Have you checked whether this profile is similar in different regions (e.g. auditory cortex, core language region, and higher-level regions like precuneus and medial prefrontal)? We have, and appendix B gives some indication of the percentage scaling improvements for each voxel/ROI. We agree that this may not be fully addressed, however, and will include an additional flatmap showing the slope of the scaling law per voxel in the supplement (Figures 3 and 4 in the rebuttal). >Were the (OPT) models of different sizes trained with the same language corpora (fixed size)? Yes, according to the original OPT paper, all models were trained with the same training set of 180B tokens. We will include a clarification of this detail in an appropriate place in the main text. --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications: I confirm my original rating (strong accept)
Rebuttal 1: Rebuttal: Thank you to all our reviewers for their detailed and thoughtful feedback. For our general response, we are attaching a series of figures that were requested or that we believe resolves the outstanding concerns that each of you have voiced. The figures will each be included somewhere appropriate in the supplemental material of the final camera-ready version. Here are descriptions of the figures, in order: Figure 1. A histogram of the relationship between data scaling and parameter scaling, meant to flesh our our argument surrounding the conditioning of our ridge regression past the 30B plateau we observe. We see that as model size increase, the average slope of voxelwise data scaling laws increase as well. Figure 2. A recreation of Figure 1c using an average $r^2$ metric. We see that using this metric, the performance of the best layer in OPT-30B is slightly better than the performance of the best OPT-175B layer, which further supports our point about conditioning. Figure 3. A flatmap of voxelwise data scaling laws. The redness of the voxel indicates the degree to which that voxel improves with additional data. Figure 4. A flatmap of voxelwise parameter scaling laws. The redness of the voxel indicates the degree to which that voxel improves with larger language models. We have attempted to address specific points mentioned in your reviews in each of the reviewer-specific rebuttals. Thanks again for your feedback. Pdf: /pdf/57bd42ca219abcb3eb263ae61aa162f38a6ce61e.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: The authors of this paper delve into the investigation of the scaling law between the performance of predicting brain activity (measured using BOLD) and the number of parameters in large language models, coupled with the amount of training data for the linear readout for retrained LLM used in AI. This pursuit of exploring a log-linear scaling law relationship is an interesting and valuable. It holds promise in providing crucial information for paving a path forward in developing better functional models of brain activity, which could be instrumental in understanding brain computations. The technical dimensions of the paper are well-executed. The authors have displayed a solid grasp of the subject matter and have employed appropriate methodologies to carry out their research. However, I have some reservations about the validity of their claims concerning the non-saturation of the scaling law for the number of parameters in large models. Specifically, Figure 1, which is central to their argument, seems to present evidence contrary to their claim. They acknowledge a visible saturation but attribute this observation to memory limitations in the way they fit the linear readout (i.e. they select one layer). However, unless they successfully demonstrate that the saturation does not occur when a more sophisticated linear readout is utilized (for example, from all layers for larger models), it is not justified for them to assert that the predictive models demonstrate scaling as a function of parameter size. Thus, the authors need to refine their claims or provide additional analysis to convincingly demonstrate that the saturation effect they observed is truly an artifact of their methodological limitations, not an inherent feature of the models they are studying. Without such demonstrations, the claim about the non-saturation of the scaling law remains unsubstantiated. In conclusion, while the paper tackles an intriguing concept and showcases strong technical elements, the authors need to strengthen their assertions through concrete evidence or refinement of their conclusions. This additional effort will greatly enhance the paper's overall quality and the validity of their findings, adding substantial value to the body of knowledge in the field. Strengths: The question of how far scaling deep learning models can continue to improve neural prediction is an important question in the quest to build more accurate models of the brain. Weaknesses: More work is needed to demosncate that the current LM are not saturating the performance of the neural prediction task. This is very critical since the main contribution of this work is that even the largest LM do not saturate the performance. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The authors need to demonstrate using other linear readouts that there is not saturation of performance in the neural prediction task. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >More work is needed to demonstrate that the current LM are not saturating the performance of the neural prediction task. This is very critical since the main contribution of this work is that even the largest LM do not saturate the performance. Thank you for your review and thoughtful comments. We agree that we did not adequately prove that the performance plateau after 30B parameters is a side-effect of the poor conditioning of the regression problem at that size, so we have performed an additional analysis to strengthen our argument. Doing a more sophisticated linear readout from multiple layers is currently out of reach, as this would be too costly to realistically train and would have even worse conditioning, with the number of features far exceeding the number of training points from our fMRI dataset. Instead, we performed an additional analysis regarding the relationship between data scaling and parameter scaling. Conditioning of the encoding regression problem depends on both the size of the dataset and the number of parameters: more data is good, more parameters is bad. If the problem with the large models is not poor conditioning, then we would expect dataset size to be less impactful than for smaller models. Conversely, if conditioning is the culprit, then we would expect the large models to show stronger data size scaling than the smaller models. Our analysis shows the latter: for larger language models performance scales more strongly with dataset size. Figure 1 of the rebuttal PDF shows a histogram with the distribution of voxelwise scaling laws (i.e. the slope of the log-linear relationship) across cortex for one subject, filtered by a minimum $cc_{max}$ (i.e. only in voxels with sufficient signal to model). We see that the distribution is shifted to the right for the larger language model, showing that encoding models built from larger LMs on average benefit more from additional data. This makes sense, as large models are those that are most affected by poor conditioning. As it seems unlikely that the representations from larger language models contain less information than those of smaller models, we suggest that the plateau we observe at 30B is mostly the result of limited dataset size. Given this result, we intend to refine and clarify the paper text, suggesting that it has to do mostly with data constraints and that with sufficient amounts of data, parameter scaling would likely continue to improve above 30B parameters. We found this result by comparing data scaling between the 125M and 30B OPT models, which we had already fit. Ideally we would have also tested the 175B OPT model, but testing dataset scaling is computationally expensive and re-fitting those models is substantially beyond our compute budget. Furthermore, in responding to another reviewer, we recomputed Fig. 1c with a sum of $r^2$ metric (Figure 2 in the rebuttal PDF) and found that by using this metric, the model trained by OPT-175B is slightly better than the OPT-30B metric. The sum of $r^2$ metric favors increases in encoding performance in well-predicted voxels over poorly predicted voxels. As well-predicted voxels will be more resilient to poor conditioning, this result aligns with our conditioning argument as well. We hope that this argument is satisfactory to you and allays your concerns.
null
null
null
null
null
null
Revisiting Implicit Differentiation for Learning Problems in Optimal Control
Accept (poster)
Summary: This paper considers the optimization problem in the context of discrete-time control of dynamic systems. The paper proposes a method for efficiently and effectively evaluating the Jacobian of saddle points of constrained optimal control problems w.r.t. COC problem parameters. The main technical result is that the computation cost grows linearly with the number of time steps. The proposed approach is evaluated on standard environments like quadrotors and cart-poles. Strengths: The whole paper is well-written and easy to follow. I appreciate the writing in the technical section (Secs. 3-5), which unfolds the core technical method in a nice and coherent way. The main technical idea is solid and sound. Fig. 2 turned out to be quite helpful when I was checking the equations. The linear time complexity follows naturally after Proposition 1 is established. Weaknesses: I don’t see major fallacies in the main technical result itself, but I still have a few high-level concerns regarding the novelty and usefulness of the proposed technique: - I feel that Propositions 1, 2, and Fig 2 are direct results of applying existing numerical and mathematical techniques in derivative derivations. From what I understand, no new tools were developed in proving these results. As I am not well-calibrated with NeurIPS’s bar, I will let other reviewers decide how novel the technical method is. - The writing in the abstract and introduction gives me the impression that the gradient computation is fully parallelizable. I can see how multiplications involving H^(-1) can be parallelized. However, solving the tridiagonal block matrix seems like a sequential procedure to me. I am guessing from reading Lines 234-239 that the implementation used a sequential solver with O(T) time complexity and was the major time bottleneck (correct me if I am wrong). It would be useful to profile the time cost of each step that can and cannot be parallelized. - The experiments didn’t seem to evaluate the proposed method in large-scale problems. Dynamical systems (and their gradients) like quadrotor can be computed extremely fast (orders of magnitude than real time) because of their few degrees of freedom. The number of time steps (<=50 if I understand correctly from Table 1) does not seem to create a large-sized Hessian (a few hundred by a few hundred?) challenging enough to compute for modern CPUs. Having 2x speed up is still nice, but I had a higher expectation for a parallel (?) linear algorithm given that the baseline is sequential and quadratic. - On a related note, I didn’t find visualizations of the quadrator/cart-pole/etc environments in the main paper or the supplemental material. Having some images to visualize these tasks would be great. A very minor comment on writing: Some sentences are quite long and hard to follow, e.g., lines 21-23 and lines 31-34. Splitting them into several small sentences would be better. Some technical comments: - Lines 124 and 125: is R^p a typo? It makes more sense to me if we replace R^p with R^d as d is the dimension of \theta. What is p? - Line 150: Why is a transpose operator needed? - Proposition 1: same question about p in the definitions of B and C; should it be replaced with d? - Proposition 1: Why is C defined as the Jacobian of h w.r.t. \theta? It makes more sense to me if it is the Jacobian of r w.r.t. \theta, which is also implied by the dimension n_r. - Proposition 2: I feel the linear time complexity O(T) assumes the dimension of \theta does not grow with the number of time steps, or H^(-1)B won’t be computable in O(T) time. Please correct me if I am wrong. - Eqn. (5): it seems more common to flip the sign of lambda and replace -A^T with A^T so that the matrix is symmetric. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See my comments above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See my comments above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review and kind words around the technical writing. > **From what I understand, no new tools were developed in proving these results.** We agree that no new mathematical tools were developed. However, identifying and exploiting the link between the IFT identities of Gould et al. and optimal control is non-trivial, as evidenced by it being missed by both the learning and control communities. We specifically want to debunk prior statements we observed in the literature that approaches based on IFT identities are inherently quadratic. We believe that such assertions need to be refuted formally by demonstrating that there are surprisingly meaningful numerical scalability and stability improvements offered by the novel application of IFT identities to control. We believe that our results (both theoretical and numerical) therefore represent an important novel contribution to both the learning and control communities. > **However, solving the tridiagonal block matrix seems like a sequential procedure to me. I am guessing from reading Lines 234-239 that the implementation used a sequential solver with O(T) time complexity and was the major time bottleneck (correct me if I am wrong).** Thank you for drawing our attention to the apparent disconnect between our discussion of parallelization and the solution of the tridiagonal block matrix. We will clarify our discussion in the abstract and introduction by highlighting that it is currently easy to parallelize everything except the solution of the tridiagonal block matrix, but that there are a growing number of methods (some of which we cite) that propose parallelizable block tridiagonal solvers. Unfortunately, the code for these parallelizable block tridiagonal solvers appears unavailable. As a result, we implement a sequential procedure for the block tridiagonal system in Python. Despite this, we still observe significant improvements in computational performance. We envision that future availability and development of custom (parallelizable) block tridiagonal solvers will yield further reductions in computation time. > **It would be useful to profile the time cost of each step that can and cannot be parallelized.** For $p=10$k and $T=1$k, our most challenging setting, we observe that the serial computation (block tridiagonal solve using the Thomas algorithm) occupies approx. 45% of total compute time for IDOC full, whereas for IDOC VJP it occupies 18%. For a fixed $p$, this ratio does not change with horizon length $T$. The ratio for $p=1$k is 39% and 36% for IDOC full and VJP, respectively. This indicates that the importance of the serial computation diminishes as the problem size (w.r.t. parameters) increases. The serial component is still significant however, and warrants developing parallel block tridiagonal solvers to further improve scalability. > **The experiments didn’t seem to evaluate the proposed method in large-scale problems.** We appreciate you pointing out that our first set of experiments, whilst showing a 2x speedup, could have been more compelling. We have therefore conducted additional large-scale experiments as detailed in the global response and attached pdf in which the dimensions of the parameters, states, and controls are large enough for parallelization to yield benefits over serial computation. These additional results (cf. Fig. 1) offer compelling evidence that our IDOC VJP method is approximately 10x faster across all horizon lengths $T$ for a large-scale problem ($p=10000$), with further gains possible as the number of learnable parameters increases. > **On a related note, I didn’t find visualizations of the quadrator/cart-pole/etc environments in the main paper or the supplemental material. Having some images to visualize these tasks would be great.** We will add visualisations (or suitable references) for the benchmark problems to the supplemental material. > **A very minor comment on writing: Some sentences are quite long and hard to follow, e.g., lines 21-23 and lines 31-34. Splitting them into several small sentences would be better.** We agree that some sentences in the submission are too long and could be made clear, thank you for bringing this to our attention. Particular attention will be paid to those you have identified. ### Technical Comments Thank you for carefully reading our work and picking up these typos! We will address these in order. * Yes, this is a typo! $p$ should be $d$. * We assume $\text{D}\mathcal{L}_\xi(\xi)\in \mathbb{R}^{(n+m)T + n \times 1}$ is a column vector. We will make this more explicit. * Correct! $p$ should be $d$ here as well. * Yes, this should indeed be $r$. * As is common in (optimal) control, we assume that $\theta$ is static across time and is comprised of all relevant parameters across all (time-varying) cost and constraint functions. * Agreed, happy to make it symmetric and perform a sign change elsewhere. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: After reading the rebuttal, I remain supportive of the paper and will raise my score to weak accept. I think there is still room for improving the experiments, but I am OK with the current paper with all the promised changes incorporated.
Summary: This paper analyzes and develops an efficient method to differentiate through a constrained discrete-time optimal control system, i.e., computing the derivative of an optimal trajectory of a constrained discrete-time optimal control system with respect to the parameters in the system’s cost function, dynamics, and/or constraints, based on the implicit function theorem (IFT). This is a key problem in many learning problems, such as inverse optimal control (inverse reinforcement learning, system identification, or learning from demonstration). The contributions of this paper include (1) the authors analyzed the sparse structures of IFT equations, and showed that the linear time complexity (in the trajectory horizon) can be obtained for solving IFT, instead of quadratic complexity. (2) Based on the analysis, the authors develop an efficient algorithm by parallelization and a vector jacobian auto-diff algorithm. The algorithm is evaluated in comparison the state-of-the-art (PDP [18] and Safe-PDP [20]) for various tasks, the proposed method is shown at least 2x faster and more numerical stability. Strengths: The study of differentiable optimal control is well-motivated, and the problem relates to many problems in learning and robotics. The authors analyzed sparse structure of the IFT equation for the constrained optimal control problem, and showed the linear complexity (in trajectory horizon) of differentiating through optimal control. Besides, they established the connection between solving IFT equation and PDP. Based on the analysis, they develop parallelize and vector Jacobian Products algorithms to accelerate the backward pass, which have been shown effective and stable. The algorithm is novel and has shown obvious improvement over the existing method (mainly PDP and Safe-PDP). I believe the algorithm will be useful and of interest of control and learning communities. The paper is well organized and well presented, and I expect the proposed method is important in practice, Weaknesses: In order to further improve the paper, some claims may need to be further clarified. - Line 159: since the algorithm requires the identification of a set of active constraints, \tilde{g}_t, from all inequality constraints, will the use of a threshold cause the numerical issues, eventually leading to bad quality gradient and unstable gradient descent? - In Fig. 4 cartpole example, is the instability of Safe PDP without log-barrier functions because of the numerical issues of identifying active and inactive inequality constraints? - In my understanding, Safe PDP using log-barrier function has two benefits: first, it avoids the needs of active identification of inequality constraints, and 2) it creates some smoothing effects in the trajectory solution space because the non-differentiability caused by the switch between “active” and “inactive” constraints in the change of parameters. However, as the authors point out, Safe PDP with barrier function is only an approximation method to compute the gradient, thus it is not surprising to have a larger loss, as shown in Fig.4. - Currently, most of the examples in experiments consider the loss and stability performance. In order to support the claim of "2x" more efficiency, I think more experiments about algorithm speed test (with respect to PDP) should be done. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please find my question in the "weaknesses" section. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Please find in the "weaknesses" section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We will address your questions as follows: > **Line 159: since the algorithm requires the identification of a set of active constraints, \tilde{g}_t, from all inequality constraints, will the use of a threshold cause the numerical issues, eventually leading to bad quality gradient and unstable gradient descent?** This is potentially an issue in practice, and this instability is well known when designing numerical solvers which use the so-called ``active-set" approach. While we didn't experience issues around constraint switching in our experiments, we acknowledge that it could be an issue in practice and may be encountered when scaling these approaches to more difficult problems. > **In Fig. 4 cartpole example, is the instability of Safe PDP without log-barrier functions because of the numerical issues of identifying active and inactive inequality constraints?** We're not sure if the instability of Safe-PDP without barrier functions is caused by identification of active and inactive constraints. While this claim is made in the Safe PDP paper (bottom on page 9 under Problem III), our approach IDOC also needs to identify the active constraints. Of course, we use the same threshold for identifying constraints for a fair comparison. We believe that constraint switching and identification in and of itself is not the root cause of instability for Safe-PDP. We will add this commentary into the manuscript since this is a very useful piece of analysis which arises from the experimental results. > **In my understanding, Safe PDP using log-barrier function has two benefits: first, it avoids the needs of active identification of inequality constraints** We believe that the identification of active/inactive constraints in and of itself is a fast, simple operation (evaluate constraint functions and apply threshold). However, varying numbers of active constraints along the trajectory make it difficult to vectorize the computation of the trajectory derivatives, due to varying-sized blocks. This property generally increases computation time compared to using the log-barrier method. Otherwise, the identification of constraints does not appear (at least, in our experiments) to impact gradient quality. >**it creates some smoothing effects in the trajectory solution space because the non-differentiability caused by the switch between “active” and “inactive” constraints in the change of parameters.** We agree that the smooth log-barrier function may yield superior gradient information during learning, since constraints don't need to be satisfied to provide gradient information. This may explain why the log-barrier approach reduces imitation loss faster at the start compared to IDOC. However, as you correctly point out, IDOC provides lower final imitation loss for many of the experiments. This appears to be due to IDOC correctly identifying the hard constraint values. We provide further insight around this in the global response.
Summary: There has been recent interest in differentiating through trajectories to obtain first-order derivatives for optimization problems including policy learning, inverse optimal control and model learning. Previous work usually uses a backward recursion which computation scales quadratically with the length of the horizon. In this work, the derivatives are computed exploiting the block-diagonal structure with computational time linearly varying with the length of the horizon. Additionally, it provides the flexibility of parallelizing the computation. This method can also directly compute vector-jacobian products easily. Most of these advantages are verified through training iterations with imitation learning and constrained inverse optimal control tasks on various robotics benchmarks. Strengths: A novel, interesting method to parallelize and accelerate the computation of derivatives is presented by exploiting the block diagonal structure of matrices. This method is numerically better-conditioned as compared to existing algorithms such as PDP and safe-PDP. Further, it is easily amenable for computing vector-jacobian products which provides it with an advantage. Weaknesses: The paper is making confusing claims or is making a misclaim. Line 38 from submission - “Naively applying these identities leads to a quadratic complexity with the length of the trajectory, which is described in prior works [18, 20]” - the citations are PDP and safe PDP. Line 46 from submission - “Furthermore, we show that the computation of these derivatives is linear with trajectory length, contradicting claims in prior works” - as far as I can tell, the previous backward and forward recursion also scales linearly with trajectory length. The PDP and safe PDP do not explicitly state quadratic complexity with trajectory length anywhere. Excerpt from PDP: “Third, in the backward pass, unlike differentiable MPC which costs at least a complexity of $O((m+2n)^2 T^2)$ to differentiate a LQR approximation, PDP explicitly solves for first derivative by an auxiliary control system, where thanks to the recursion structure, the memory and computation complexity of PDP is only $O((m+2n)T)$.” Excerpt from this submission Line 82 - “While the derivative computation in PDP is linear with the number of timesteps, it is inherently a serial calculation, requiring a recursion through time” In conclusion, the paper is making contradicting claims about advantage over previous work. Otherwise, the paper is not communicating the claims correctly. I believe whatever computational benefit is reported is because of the ability to parallelize the computation and the final numerical results are above average but not excellent. The numerical stability problem with PDP is not consistently seen and it is likely that it can be quelled by better numerical conditioning. Additive cost functions with quadratic structure is only one type of optimal control problem. The block diagonal structure is dependent on this assumption. There is a discussion in future work about extensions to non-additive cost functions. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1) While citing results from Gould et. al such as equation (5) and Proposition 1, the authors are encouraged to provide more explanations about whether the results are a direct application and provide more context in the paper. 2) It is not communicated clearly what is the final take away from sections 4.3 and 4.4. It looks like by assuming H is non-singular, it is possible to do the computation faster using this method. Will this method also result in numerical issues if H is improperly conditioned? If adding the proximal term is the solution, why does a similar type of solution not work for PDP? In other words, add a numerical conditioning to derivatives of c? 3) This paper needs to be self-contained and there is too much dependence on referring to previous papers overall. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have discussed the limitations in future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate you taking the time to review our paper and drawing our attention to specific claims within it that appeared unclear. In particular, we apologise for the confusion regarding our comments around quadratic complexity -- we were not intending to make a misclaim here. > **The paper is making confusing claims or is making a misclaim.** To clarify, our comments around quadratic complexity do not relate to the PDP and Safe-PDP methods themselves. They relate to comments in the PDP and Safe-PDP papers about the quadratic complexity of differentiable MPC and CasADi. Reading our submission again, we recognize that this was unclear and apologize for the confusion. The comments that we are referring to are: * Section 7 in the PDP paper (NeurIPS proceedings version): *"unlike differentiable MPC which costs at least a complexity of $O((m+2n)^2 T^2)$ to differentiate a LQR approximation"* * Section 8 of the Safe-PDP paper (NeurIPS proceedings version): *"Specifically, Safe PDP has a complexity of $O(T)$, while CasADi and Differentiable MPC have at least $O(T^2)$. This is because both CasADi and differentiable MPC are based on the implicit function theorem [75] and need to compute the inverse of a Hessian matrix of the size proportional to $T \times T$.".* The second comment from the Safe-PDP paper refers to methods using the implicit function theorem (IFT) requiring a matrix inversion of the differential KKT system which is $O(T^3)$ in general. We are showing in our contribution that we can use IFT identities (derived from the same differential KKT system with variable elimination) presented in Gould et al., and recover $O(T)$ complexity. This time complexity does not sacrifice expressiveness; we differentiate through the exact same class of optimal control problems described in the PDP and Safe-PDP paper. We will include the specific quotes from the PDP/Safe-PDP paper to clarify our actual claims when we update the manuscript. We are simply refuting that methods based on the IFT require an $O(T^3)$ matrix inversion. In fact, we show that using the IFT yields significant benefits to computation time and numerical stability, especially for problems with inequality constraints. > **This paper needs to be self-contained and there is too much dependence on referring to previous papers overall.** We will improve the self-sufficiency of this paper as described in the global response. We will also better clarify that equation (5) and proposition 1 are taken directly from Gould et al. > **I believe whatever computational benefit is reported is because of the ability to parallelize the computation and the final numerical results are above average but not excellent.** Please see our global response on the additional experiments. We hope this improves our results beyond above average! > **The numerical stability problem with PDP is not consistently seen and it is likely that it can be quelled by better numerical conditioning.** We agree that techniques can be used to improve the stability of PDP, for instance using robust matrix factorizations (e.g., SVD instead of Cholesky/LU). However, we believe that alone cannot significantly improve the performance of Safe-PDP for hard constraints. In fact, in the Safe-PDP implementation, SVD is used for matrix inversion/factorization, whereas we simply use an LU-based solver. We believe our experiments show conclusively that the difference between Safe-PDP and IDOC is significant where hard inequality constraints are present, and doubly so given the simple nature of the optimal control problems we evaluate on. To further reinforce our claims, in the global response and attached pdf, we provide a brief discussion on the importance of using hard constraints instead of log-barrier functions. > **Additive cost functions with quadratic structure is only one type of optimal control problem. The block diagonal structure is dependent on this assumption. There is a discussion in future work about extensions to non-additive cost functions.** Additive cost is not the most general formulation, but encompasses many optimal control problems and enables the use of Bellman's principle of optimality. We also point out that PDP and Safe-PDP will not work without an additive cost structure. Recall that our formulation can also extend to include a final cost defined over the whole trajectory, which is not easily achieved under PDP/Safe-PDP. We also note that a quadratic cost structure is not required to use our proposed method, only that the cost function is twice differentiable. > **It looks like by assuming H is non-singular, it is possible to do the computation faster using this method. Will this method also result in numerical issues if H is improperly conditioned?** Your reasoning is correct, non-singularity of $H$ will allow faster and more stable computation. We accept that poor conditioning in $H$ will cause issues for our method, but these issues will also be problematic for PDP and Safe-PDP since they use (and invert) the same blocks used to build $H$ in their calculations. > **If adding the proximal term is the solution, why does a similar type of solution not work for PDP? In other words, add a numerical conditioning to derivatives of c?** A small proximal term is important for the quadrotor experiment, where $H$ is inherently singular. The other experiments on the other hand, avoid using the proximal term altogether. Therefore, the numerical differences cannot be attributed to the addition of the proximal term alone. We hope by addressing your major concerns around the claims, you will consider raising your score. --- Rebuttal Comment 1.1: Title: Acknowledgement Comment: I have read the author response and other reviews. The authors have given a fair response to my questions. The contribution of the paper is better appreciated after this. Many of the differentiable trajectory optimization methods do run into numerical issues. However, I believe the paper can still be improved upon and presented better. Further, it helps to include a further discussion on ILQR/DDP literature from robotics and control. e.g. [1]. At this time, I increase my score to 5. [1] Roulet, Vincent, et al. "Iterative linear quadratic optimization for nonlinear control: Differentiable programming algorithmic templates." arXiv preprint arXiv:2207.06362 (2022). --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for acknowledging our response and for raising your score accordingly. We will action feedback from the reviewers to improve the presentation of the paper. We are happy to include more discussion around shooting methods used in robotics and control like DDP (and iLQR). We would like to emphasize that the derivation our method (and any method that differentiates through optimality conditions) **does not depend on the solver used to solve the optimal control problem**, as long as the solver successfully finds an optimal point. It is therefore imperative that a robust solver with sufficient convergence guarantees is used to ensure that the trajectories returned by the solver are in fact, optimal. For this reason, we picked IPOPT to solve our control problems (consistent with PDP/Safe-PDP). This is due to IPOPT's primal-dual line search filter approach w/second order corrections, which has global convergence properties. As far as we are aware, DDP style solvers have not quite reached the same robustness. We will include this information in the updated version of the manuscript.
Summary: Prior work shows that computing trajectory derivatives scales quadratically w.r.t to timesteps. This work proposes that trajectory derivatives scales linearly w.r.t. to timesteps, which can be parallelized, resulting in decreased computation time and increased numerical stability. Strengths: - Good paper structure, reasonable flow. - Evaluation shows that the gradient computation is numerically stable, which leads to stable training for inverse RL and imitation learning tasks. Weaknesses: - Section 3.1 is not needed; readers are familiar with matrix and vector derivatives. - Good to move derivative w.r.t to trajectory in Section 4 as a motivation for this work. (i.e. move to Section 3) - Need to better structure Section 4; state in beginning of 4.2 the goal is prove that computing the trajectory derivative relies on a block matrixes. - Assumes theta is a vector, does not apply for non-linear approximations such as DNNs - The main benefit of this work is being able to parallelize the computation of trajectory derivatives. There seems to be a lack of evaluation on this. (outside of 1(b)). Need to show total training time. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: -How does this method scale with longer and longer trajectories? Need to see X axis as trajectory length and Y axis as solving time. - Confused, why prior work fail to identify the block structure? It seems to directly follow from prior work (Gould et al.) Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We would like to emphasize not only the decreased computation time, but the significant performance gains on differentiating through inequality constrained problems without resorting to a log-barrier approximation. Additional compute and scalability experiments are presented in the global response. We will make your suggested changes to the document structure to enhance readability. > **Assumes theta is a vector, does not apply for non-linear approximations such as DNNs** While we assume $\theta$ is a vector, this does not restrict its generality. For the DNN case you mention, $\theta$ would represent the parameters of a neural network "unrolled" into a vector. Importantly, we can define expressions such as $df/d\theta$ even for a DNN, which can then be used to compute the expressions in Eq. 4 in the original manuscript. Does this address your concern? > **Need to show total training time** The only difference between IDOC and PDP/Safe-PDP is the computation of the backward pass. The total training time is heavily dependent on the solver used in the forward pass (e.g., IPOPT, DDP/iLQR solver, MPC PyTorch solver that runs on GPU). We believe our current comparison between methods is fair (with a very similar implementation in Python/Numpy across methods), and do not wish to muddy the comparison by specifying a full learning system. However, to make our comparisons even more comprehensive, we have provided additional compute and scalability experiments in the global response. > **Confused, why prior work fail to identify the block structure? It seems to directly follow from prior work (Gould et al.)** We're not sure how to answer this question, but we believe we are the first to leverage the Gould et al. identities. Admittedly, there is a bit of confusion when reflecting on prior works. Differentiable MPC [2] did correctly identify the block sparse structure in the KKT system and formulated an auxiliary LQR problem to solve it, as reviewer kJe4 correctly pointed out. The Gould et al. identities are different from solving the KKT system as in [2] however, since they are derived by eliminating the Lagrange multipliers (see Section 4.3 in our original manuscript). This auxiliary LQR approach from [2] is very similar to the approach proposed in PDP and Safe-PDP. However, Safe-PDP claimed that [2] (which uses the implicit function theorem) yields a trajectory derivative that scales quadratically with horizon $T$, which was an important motivation for our work. We will more clearly clarify the difference in approach of our method compared to existing work (e.g., PDP/Safe-PDP/DiffMPC) in the revised manuscript. Rather than simply identifying the block structure from the identities (recently) derived in Gould et al., we believe exploiting the structure of the resulting equations to demonstrate faster computation and substantially more stable trajectory derivatives across a range of problems (especially through inequality constrained problems) is a significant contribution. > **-How does this method scale with longer and longer trajectories? Need to see X axis as trajectory length and Y axis as solving time.** See the global response and attached pdf for additional results to more comprehensively evaluate the scalability w.r.t. trajectory length and problem size. We hope you will consider raising your score given we will address your editorial comments and have provided additional scalability results.
Rebuttal 1: Rebuttal: # Global Response Overview We thank the reviewers for providing detailed comments and feedback on our work. We are pleased the reviewers appreciated the computational and numerical performance improvements compared to existing state-of-the-art offered by our use of (structured) implicit differentiation and vector-Jacobian products for optimal control (IDOC and IDOC VJP, respectively). We address a number of common concerns raised by multiple reviewers in a series of posts. Specifically, we address * Additional results around speed up and numerical stability * Emphasis around handling problems with inequality constraints * Making the paper more stand-alone and leave responses to specific comments from individual reviewers as a reply to their respective posts. ## Additional Numerical Results We constructed a large, synthetic example to more comprehensively test the computation time saving and numerical stability of IDOC. We sample the blocks, comprising terms in Eq. 4 in the original submission, randomly such that they are 1) large, and 2) poorly conditioned to test the effect of parallelisation and numerical stability, respectively. In summary, our results indicate: * IDOC, IDOC VJP and PDP **scales linearly** w.r.t. horizon length $T$ and is approx. 10x faster for $p=2000$. * IDOC VJP scales an **order of magnitude** better than both IDOC full and PDP w.r.t. the number of parameters $p$. * IDOC is **more numerically stable** than PDP and degrades more gracefully as the numerical conditioning of the blocks deteriorates. See the 1 page response pdf for figures (Fig. 1) and more experimental details. ## Importance of Handling Inequality Constrained Problems We would like to emphasize the significance of the improvement IDOC affords over Safe-PDP for differentiating through optimal control problems with inequality constraints. We acknowledge that the log-barrier approach proposed in Safe-PDP yields similar imitation loss IDOC. However, there are additional benefits to avoiding a log-barrier approach and differentiating through the constrained problem. * Recovery of the correct constraint function parameters when learning from demonstration data (see Fig. 2 in the response pdf). * Avoids necessitating initializing the forward pass with a feasible trajectory (otherwise the log-barrier function is undefined). ## Difficulty of Experiments Some reviewers questioned the difficulty of the experiments. While we agree that scaling up these control problems to more difficult settings (e.g., the DM control suite) is of interest to the community generally, our particular contribution is largely analytical and relates only to the gradient computation. We believe the performance gain of IDOC over PDP/Safe-PDP are already significant, especially when considering the simple nature of the problems we evaluate on. We hope the additional challenging synthetic experiments sufficiently substantiate our claims around numerical stability and parallelization. ## Improving the Stand-Alone Quality of the Paper We will make the following changes to our paper to make it more stand-alone. * Describe for each experiment the dynamics equations and cost function parameterization in the appendix. * Describe the forward solver used (IPOPT) and associated hyperparameters during learning. * Add more visualizations to the appendix around the experiments. * Better place the work in the broader RL literature, rather than simply comparing against PDP/Safe-PDP. We believe IDOC is well suited to model-based RL approaches such as the ones proposed in [1], which solve an OC problem (described in Equation 2 in our submission) to produce actions (implicit policy). We hope our work will encourage future work around this direction. ### References [1] Romero, A., Song, Y. and Scaramuzza, D. (2023). *Actor-Critic Model Predictive Control.* arXiv preprint arXiv:2306.09852. Pdf: /pdf/8bf9ed88870fc076acaeaa63cdb1fc958115f2fe.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: Differentiating through optimal control problems to learn various components, such as the dynamics model or cost function, is a promising method for inverse reinforcement learning (IRL) or incorporating more structure in learned control policies. The central component of these approaches is to differentiate through optimality conditions, such as the KKT conditions or Pontryagin's maximum principle (PMP). Methods which differentiate through the PMP scale linearly with the planning horizon by constructing an auxiliary control system solved in the backward pass. Prior work has argued that this is not the case for methods which differentiate through the KKT conditions due to a large matrix inversion. Instead, this paper shows that this is not true if one properly accounts for the block structure and sparsity patterns of the matrices. The authors show that their approach also scales linearly with horizon and has more opportunity for parallelism to enable faster gradient computation. They compare to Pontryagin Differentiable Programming (PDP), a method derived from PMP, on a number of standard benchmark problems in the case of inverse reinforcement learning. Both methods perform similarly most of the time, with PDP sometimes failing, potentially due to issues with numerical stability. They also show this gap widens when inequality constraints are introduced, with their method yielding significantly better gradients and imitation loss. Strengths: - Improving the scalability of implicit differentiation for learning parameters of optimal control problems is an important problem with applications in IRL, system identification, and structured feedback policy classes for reinforcement and imitation learning. - The paper is well organized and clearly written. It does a good job explaining the novelty and results and provides enough information to support its claims. - This paper shows that computing gradients through the KKT conditions for general optimal control problems with inequality and equality constraints can also scale linearly with horizon when we properly account for the matrix structure. Additionally, they show that methods which differentiate through the PMP conditions are equivalent to their approach, only differing in the use of a recursive rule for gradient computation. - Parts of the gradient computation in their method is parallel across time steps, unlike prior work which is entirely sequential. This allows them to compute gradients and vector-Jacobian products much more quickly. - Gradients appear to be more stable, especially in the case of inequality constraints, compared to PDP and its extensions. The computation time for the backwards pass is also significantly faster for the proposed method over PDP. This may enable scalability to longer horizon problems. - Unlike prior methods which efficiently differentiate through the KKT conditions, such as [2], they do not rely on differentiating through an approximation of the non-convex problem. Instead, they compute gradients through the original problem, which may have benefits in terms of gradient quality. Weaknesses: - The central goal of the paper is to improve the scalability and numerical stability of gradient computation for long horizons. However, there is no evaluation of runtime or gradient stability across different horizon lengths. Instead, the benchmarks use a fixed horizon, which is of moderate length. It would strengthen the paper to see a breakdown of how the improvements over PDP scale with the horizon length of the problem, and if these trends carry over to even longer horizons than currently considered in the paper. If PDP truly scales worse due to numerical stability issues, then its performance should get worse with longer horizons while the proposed method does not. - There is no discussion of what solver is used in the forward pass for the experiments and how the Lagrange multipliers are found for gradient computation. Even if the paper reuses the methods from the PDP paper, the paper should still be stand-alone in that it contains these details in the appendix. - The paper says that differentiable MPC [2] is limited to affine-quadratic systems, which is not true. By using iLQR, it is able to handle nonlinear dynamics and non-convex cost functions. The paper also shows how to incorporate box constraints on controls. However, the proposed approach is more general in that it can handle arbitrary constraints. This should be fixed in the final paper. - This method is not the first to exploit structure in the KKT conditions to scale linearly with horizon. Despite the arguments in [18, 20], differentiable MPC [2] does not involve a large matrix inversion. Instead, it scales linearly with horizon by solving an auxiliary LQR in the backwards pass, similar in spirit to PDP in [18, 20]. While the proposed method is more general, there should be some discussion of this relationship. It would also strengthen the paper to include it as a benchmark given that it is also derived from the KKT conditions. It would especially be interesting to see how the quality of gradients and runtime of the backwards passes compare. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How does the gradient stability of the proposed method compare to PDP and its related methods as horizon increases? - How does the proposed method fair with much longer horizons than those considered in the experiments? - What is the solver used in the forward pass and how are the Lagrange multipliers found? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: There is some discussion of how the bottleneck on speed is now in the forward pass, given that the backward pass has been significantly sped up. The speed of the forward pass will also depend on the choice of solver for the constrained optimization problem. And there is some discussion of how the opportunities for parallelism in the backward pass are limited due to the availability of solvers for general block tridiagonal linear systems. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your appreciation of the contributions proposed in our work. We would like to address your concerns. > **It would strengthen the paper to see a breakdown of how the improvements over PDP scale with the horizon length of the problem, and if these trends carry over to even longer horizons than currently considered in the paper.** We have provided additional insight and analysis around runtime scalability in the global response. In short, we believe we have firmly demonstrated this by evaluating both computation time and numerical stability on a synthetic example with varying $T$ up to $T=1000$. > **There is no discussion of what solver is used in the forward pass for the experiments and how the Lagrange multipliers are found for gradient computation.** We are currently using the primal dual interior point solver IPOPT which solves for optimal trajectories and provides Lagrange multipliers. We will include these details in the paper and make our paper more stand-alone. We will also mention that we can compute the Lagrange multipliers given the solution as in PDP and Safe-PDP. > **Despite the arguments in [18, 20], differentiable MPC [2] does not involve a large matrix inversion.** Thank you for mentioning this, we agree completely after revisiting the paper. We will reflect these changes in the manuscript, notably that the advantage of IDOC and Safe-PDP is around handling more general constraint functions and furthermore, that DiffMPC is very similar to PDP in that an auxiliary LQR problem is solved. Interestingly, the method proposed in DiffMPC solves the VJP directly, and as such as is a worthwhile additional comparison. However, we note that in Module 2 in the DiffMPC paper for handling non-convex problems (under Backward pass, steps 4-5), a **third-order derivative** must be computed. Specifically, this is the derivative of (w.r.t. $\theta$) of the derivative of $\ell$ w.r.t. $H^n_\theta$. This will significantly affect computation time of DiffMPC for large-scale problems. While we have not had sufficient time to perform the comparison during the rebuttal period, we will attempt to perform a comparison during the discussion period. --- Rebuttal Comment 1.1: Comment: Thank you for your response! I believe that the additional experiments and details will strengthen the paper. If the additional comparisons to DiffMPC could make the final paper, including computation time and performance, that would be great but not necessary in my opinion. I will adjust my score accordingly.
Summary: The paper introduces a method for calculating analytic trajectory gradients in constrained optimal control problems using implicit differentiation, with following contributions: * Shows that computation of these derivatives can be linear in trajectory time-steps, utilizing the structure in the matrices in the gradient computation. * Shows how to parallelize the gradient computation for reduced overall computation time and better numerical stability. * Shows direct computation of vector-jacobian products for find optimal trajectories with some outer loss. The paper utilizes results from previous works [15,18] and uses insights from the those results to build incremental contributions. Though incremental, the insights can be practically useful. Strengths: Strenghts: The contribution of the method can be practically useful for trajectory gradient methods allowing for faster computation. Originality - The insights and the contributions for the paper are original to best of my knowledge. Using block sparse structure of gradient computation for linear time complexity in time-steps is the main original idea. Quality - The quality of contribution is moderate. Since the paper is mainly is analytical, built on insights based on previous results and the speed-up is only 2X. Clarity - I believe the writing of the paper can be more clear, especially section 3 onwards. Subsection 4.2 needs more work to be easy to parse. Significance - Significance is moderate. Since the speed is only 2X, and also in figure 3, the improvements over baselines are minor in 3 out of 4 environments. In Figure 4 as well, the final final imitation loss seems very close for different methods, though the gradient quality indeed seems better. Weaknesses: Major Weaknesses: 1. The paper lacks comparison on other experimental setting such as sysID and control from the baseline paper [18]. 2. It seems the relevance for the paper is mostly in the setting where trajectory gradients wrt to model parameters or objective parameters is required such as LfD. For a holistic comparison, there should be benchmarking wrt to current important methods specialized for SysID (model learning in MBRL) or inverse RL. Minor Weaknesses: 1. Improve the legends in figure 3, and enlarge figure 1(a). In general, all the figures needs improvement. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I have a few questions related to the significance of this work. If you can help me with following questions to better place the paper in the field of optimal control and RL. 1. Is the method introduced only limited to LfD setting or are there any other setting in RL or optimal control this method can be used for? 2. Can you place this work with respect to model-based/model-free/inverse RL methods? i.e if this method or any part of this method can be used for policy learning or planning or model learning? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Limitations: * No clear experiments about how the method scales with more challenging and complex environments such as DM control suit, etc. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We will gladly address editorial comments around legends and writing. In addition, we will address the concerns you have raised under both **Strengths** and **Weaknesses**. > **Quality - The quality of contribution is moderate. Since the paper is mainly is analytical, built on insights based on previous results and the speed-up is only 2X.** Whilst acknowledging that our contribution is largely analytical, we note that it holds for a very general class of problems with the state-of-the-art split over two papers (PDP and Safe-PDP). Furthermore, we believe clarifying misconceptions around the implicit function theorem (see our response to reviewer tsv7) is of value to both the learning and control communities. We hope our new numerical experiments in the global response changes your opinion around the relative speed up! > **Significance - Significance is moderate. Since the speed is only 2X, and also in figure 3, the improvements over baselines are minor in 3 out of 4 environments.** Furthermore, our extra numerical results presented in the global response now clearly demonstrate a significant numerical performance improvement over the state-of-the-art on large-scale challenging problems (now an approximate 10x speedup, with further gains possible for larger problem sizes). Similarly, our extra numerical results further emphasise the significant gains in gradient quality over state-of-the-art achieved by differentiating through problems with inequality constraints without resorting to approximate log-barrier problems. These stability improvements are on top of the observed speedups. > **In Figure 4 as well, the final final imitation loss seems very close for different methods, though the gradient quality indeed seems better.** Additional analysis exploring the benefits of avoiding the use of log-barrier approximations inherent to Safe-PDP is provided in the global response. Importantly, this analysis extends beyond imitation loss, highlighting that avoiding the log-barrier allows for the correct recovery of the true constraints and system parameters, which cannot be observed from plots of imitation loss alone. > **It seems the relevance for the paper is mostly in the setting where trajectory gradients wrt to model parameters or objective parameters is required such as LfD. For a holistic comparison, there should be benchmarking wrt to current important methods specialized for SysID (model learning in MBRL) or inverse RL.** We believe the CIOC problem we evaluate on where the dynamics and the cost are jointly learned is more difficult than just the SysID and the IRL tasks proposed in the PDP paper. In fact, the gradient computations which arise from the SysID task amounts to just backpropagating through the dynamics equations. This renders differentiating through the KKT/PMP conditions completely unnecessary. This observation also holds for the control and planning tasks. However, we are happy to add additional results to the appendix around IRL (known dynamics) and SysID (known cost) for completeness. > **Can you place this work with respect to model-based/model-free/inverse RL methods? i.e if this method or any part of this method can be used for policy learning or planning or model learning?** Thank you for suggesting this. LfD is the most obvious task to apply differentiable COC, however we have added some additional commentary in the global response. We believe our method can be used anywhere a trajectory derivative is required. For example, we provide a reference to an RL work which learns an implicit policy, defined as the solution to an optimal control problem. We hope our method will further encourage and enable the learning of these so-called implicit policies. --- Rebuttal Comment 1.1: Comment: I have read the rebuttal. Thanks for the comments and added experiments. I believe that added experiments adds extra value to the paper and strengthens it. Though I believe overall - the presentation, writing, method writing, experiments - need extra work. I also think the contributions of method are more of computational insights i.e. one of the main result/contribution of the paper - real fast computation of derivatives in IDOC VJP - is built from the insight about block structure in results derived in previous work and then doing multiplication from left-to-right. Having said that, I believe that these results can be helpful for the community and should be disseminated. But i will encourage the authors to revise the paper with all the mentioned feedback from reviewers to have more appealing article and submit again. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for your response. We appreciate you supporting our work and for agreeing that our results can be helpful for the community and should be disseminated. We’re hoping to do this sooner rather than later and will certainly incorporate all reviewer feedback in subsequent versions of our paper.
null
null
null
null
On the Connection between Pre-training Data Diversity and Fine-tuning Robustness
Accept (spotlight)
Summary: The paper introduces an empirical study for visual pre-training. By focusing on the pre-training, the authors conduct wide-range experiments through (i), data quantity, (ii) label granularity, (iii) label semantics, (iv) image diversity, (v) data sources. The empirical study considers how to use pre-training datasets (both real and synthetic datasets), fine-tuning (data distribution), and test tasks (image classification, out-of-distribution detection). In the experimental section, the study disappears several findings for visual pre-training. Strengths: - The novelty of the project is experimental findings with a wide-range of visual pre-training. The paper reveals five items in effect of data quantity (Section 4.1), label granularity (Section 4.2), label semantics (Section 4.3), image diversity (Section 4.4), and pre-training with different data sources (Section 4.5). Weaknesses: - How about implementing vision transformer (ViT) architecture? According to the experimental setup in Section 3, almost all of the network architectures are based on CNN architecture. Although some additional results are reported CLIP, one of the current visual architectures is undoubtedly ViT architecture. Since many architectures are implemented in the experiments, it doesn't seem that there is a lack of computing resources for the experimental conduction. - The reviewer would touch the results shown in Figure 4. Does the graph show the highest score in the upper right corner? The claimed point with 25k pre-training samples (green line) is better than the baseline with 129k iWildCam images, but still wouldn't it be better to have more data such as 100k and 150k in terms of robustness? With this results, can we conclude tha "we do not need a lot of pre-training data to see significant robustness gains" (l.46-47)? - Related to Section 4.5.1 and Figure 10, the PixMix framework [Hendrycks+, CVPR22] with the combination of real and synthetic images improves a robustness score. Therefore, for robustness performance, the present paper should implement PixMix framework rather than that of single usage of fractal images. [Hendrycks+, CVPR22] Dan Hendrycks et al. "PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures" in CVPR, 2022. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - See above-mentioned questions Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: There are no negative limitations and societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and feedback. **“How about implementing vision transformer (ViT) architecture?”** We understand that ViTs have gained popularity in recent years. We include results with ViTs in Figure 2 of our rebuttal PDF, specifically ViT-B/32, pre-trained in both supervised and self-supervised (SimCLR) manners. We show that our data quantity result holds for the ViT-B/32 architecture, and that supervised and self-supervised pre-training yields the same linear trends. Due to time constraints, we could not incorporate other ViT architectures for our current suite of experiments, however we will include the ViT-B/32, B/16, and L/14 architectures for subsequent versions of the paper. **“Does the graph show the highest score in the upper right corner?... With this results, can we conclude that ‘we do not need a lot of pre-training data to see significant robustness gains’ (l.46-47)?”** To clarify your question, the highest score is in the upper right hand corner, and we plot in-distribution and out-of-distribution performance of the same model. Perfect robustness is having the same performance on both distributions (the dotted black line in our plots). However note that a low-accuracy random classifier would also have perfect robustness, so there’s some nuance to this statement. The statement in Lines 46-47 does not contradict our findings regarding the impact of pre-training data quantity (Section 4.1). Of course pre-training on significantly more data will lead to further robustness improvement on downstream tasks (e.g. 150K versus 25K trend lines, see Figure 4). However, in Lines 46-47, we want to highlight that even with access to limited pre-training data (e.g. 25K), it is still beneficial to do pre-training, rather than training from scratch on the target task. **“For robustness performance, the present paper should implement PixMix framework rather than that of single usage of fractal images.”** Thank you for the pointer to this paper. The goal of our experiments with data sources (Section 4.5) is to disentangle the impact of the type of images (e.g. synthetic vs real), as well as the varying characteristics of the data distributions (e.g. long-tailed vs class-balanced). We acknowledge that there exist strategies to better combine signals from both real and synthetic sources to improve robustness. However, exploring data mixing strategies is beyond the scope of our investigation. --- Rebuttal Comment 1.1: Title: To Reviewer t8Yd: Please respond to the author rebuttal Comment: Dear Reviewer t8Yd, The deadline for author discussion period is approaching soon. Please respond to the author's rebuttal and indicate whether your concerns have been addressed. Thank you! -AC --- Rebuttal 2: Title: Response to Author Rebuttal Comment: Thank you so much for the authors. The authors response has addressed my questions. I will lean to the accept-side paper rating. Thanks again for the author's efforts.
Summary: This paper delves into the impact of pre-training data construction on fine-tuning robustness, encompassing aspects such as dataset size, class granularity, in- and out-class diversity, class similarity, and the use of synthetic data for pre-training. The evaluation metric employed is the in- and out-of-distribution testing performance, assumed to follow a linear trend. The study investigates changes in the slope of this trend to assess the influence of the mentioned factors. The findings suggest that pre-training data quantity and label granularity significantly affect fine-tuning robustness. Strengths: This paper presents a comprehensive exploration of the relationship between pre-training datasets and fine-tuning robustness. The novel perspective on dataset construction offers valuable insights into deep learning models. Weaknesses: A potential limitation lies in the reliance on a single metric to measure effective robustness, which might not provide a fully comprehensive evaluation of the connection between pre-training datasets and fine-tuning robustness. The figures' lines and points lack sufficient definition, which makes it somewhat challenging to grasp their full meaning. Some lines appear to be extrapolated, raising concerns about their validity beyond the tested environments, as observed in Figure 4, 5, 6, 7, 8, 10, and 11. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: In addition to the general limitations mentioned in the ''Paper Weakness'' part, I have several additional questions: 1. Incorporating transformer networks into the experiments would be beneficial, considering their increasing popularity in vision tasks. 2. Addressing the influence of networks in the experiments, particularly in Sec. 4.1 "Effect of Data Quantity," is crucial. Larger models might suffer from reduced training dataset sizes, while smaller models could perform well under such circumstances. It is essential to mitigate this influence, as the plotted performance directly relates to different models. 3. The absence of a discussion on the influence of long-tailed data distribution is noteworthy. Understanding whether imbalanced pre-training datasets affect fine-tuning robustness is crucial. Treating data distribution merely as a confounding effect might not be sufficient. 4. In some cases, such as Figure 4 and 11, the linear trend is not readily apparent, and the scattered data points seem to follow more irregular patterns. Further clarification on these trends and their implications would be helpful. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The authors have addressed some limitations of their work, and there are additional suggestions for improvement in the 'Paper Weakness'' part and "Questions" part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks to the reviewer for providing valuable suggestions on how we can improve our work. **“The reliance on a single metric to measure effective robustness, which might not provide a fully comprehensive evaluation of the connection between pre-training datasets and fine-tuning robustness.”** Effective robustness trends have been shown to hold for a variety of interventions, including training method, hyperparameters and dataset size [2]. Besides, robustness could be defined in many different ways in existing literature. Given the various interventions we could perform on the pre-training distribution, we chose to focus on robustness to natural distribution shifts, which has become increasingly important in the context of large-scale pre-training. **“The figures' lines and points lack sufficient definition, which makes it somewhat challenging to grasp their full meaning.”** Thank you for the useful feedback. We provide background on the effective robustness framework as well as what the points represent in Section 2 (Background). We will improve the clarity as well as the context with respect to related work in subsequent revisions. The reviewer could also refer to previous work [1, 2] for more details. **“Some lines appear to be extrapolated, raising concerns about their validity beyond the tested environments, as observed in Figure 4, 5, 6, 7, 8, 10, and 11.”** Thank you for the important observation. Concerns around extrapolation are the reasons why we provide error bars for the reported linear trends in the form of bootstrap confidence intervals. Besides, across the range of performance currently attainable on the downstream task (iWildCam-WILDS), we find that the F1 scores resulting from our interventions overlap significantly, allowing us to fairly compare the robustness properties across different pre-training distributions that yield similar performance range. **“Incorporating transformer networks into the experiments would be beneficial, considering their increasing popularity in vision tasks.”** Per your comment, we report new results with ViTs in Figure 2 of our rebuttal PDF, specifically ViT-B/32, pre-trained in both supervised and self-supervised (SimCLR) manners. We observe that our data quantity result holds for the ViT-B/32 architecture. We will expand our analysis of ViT with more architectures in the subsequent versions of the paper. **“Larger models might suffer from reduced training dataset sizes, while smaller models could perform well under such circumstances. It is essential to mitigate this influence, as the plotted performance directly relates to different models.”** This is an interesting point to study. In Figure 1 (right) of the rebuttal PDF, we plot the residuals to the overall trend line of pre-training on a fixed data quantity (shown in Figure 4 of the main paper), for each architecture size. We find that the residuals do not exhibit any particular pattern and remain relatively consistent across different model sizes. **“The absence of a discussion on the influence of long-tailed data distribution is noteworthy. Understanding whether imbalanced pre-training datasets affect fine-tuning robustness is crucial.”** We do include a discussion of pre-training on long-tailed data in our paper. Our experiments involve pre-training on two versions of iNaturalist: the raw, long-tailed data (Figure 10) and the 1000-class, class-balanced subset of iNaturalist using its most frequent classes (Figure 11). In both cases, we find that iNaturalist behaves similarly to ImageNet in terms of the robustness properties. We discuss these findings in Lines 239 - 248 of our paper. **“In some cases, such as Figure 4 and 11, the linear trend is not readily apparent, and the scattered data points seem to follow more irregular patterns.”** For certain experiments (e.g. pre-training with 50K or 100K images, Figure 4), our interventions make little difference on the downstream robustness. Consequently, the linear trends end up overlapping. Per the reviewer’s concern, we report the linear fit of each trend line in Figure 4 and Figure 11 in the following table. We note that the $R^2$ is high for all of the linear trends. The “irregular data pattern” effect is largely a result of the intervention not producing a large change in linear fit, making our data points seem like a large cluster. We will include separate plots for each subsample experiment as well as add this $R^2$ analysis to the appendix. | Pre-Training Dataset | Samples | Coefficient of Determination ($R^2$) | | ------------------- | ------- | ---------------------------------- | | Imagenet | 5k | 0.634 | | ImageNet | 25k | 0.799 | | ImageNet | 50k | 0.874 | | ImageNet | 100k | 0.769 | | ImageNet | 150k | 0.862 | | iNaturalist | 5k | 0.689 | | iNaturalist | 25k | 0.741 | | iNaturalist | 50k | 0.873 | | iNaturalist | 100k | 0.824 | | iNaturalist | 150k | 0.908 | | iNaturalist | 150k | 0.908 | | ImageNet | 150k | 0.862 | | Diffusion | 150k | 0.906 | | FractalDB-1k | 150k | 0.565 | [1] Measuring robustness to natural distribution shifts in image classification. Taori et al., 2020. [2] Accuracy on the Line: On the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization. Miller et al., 2021. --- Rebuttal Comment 1.1: Title: Response to Author Rebuttal Comment: Thank you for your rebuttal; it has effectively addressed most of my concerns.
Summary: In this paper, the authors investigate the influence of pre-training data on the robustness of fine-tuning. The authors design several experiments by following a common pre-training, fine-tuning and evaluation pipeline. They found that the quantity of the pre-training data and the granularity of the label set are two most influential factors on the robustness of downstream fine-tuning. The authors further leverage synthetic data from Stable Diffusion to increase the effectiveness of the pre-training distribution along these two ablation axes. Strengths: 1. The paper is well-written and the motivation is clear. 2. The authors considers many factors that will possibly affect the fine-tuning robustness including data quantity, label granularity, label semantic, etc. 3. The conclusions of the paper are clear which is useful for follow-up research. Weaknesses: 1. The authors mainly focus on the impact of the pre-training data on the fine-tuning robustness, it is not clear if different fine-tuning methods (only fine-tune the last layer, etc) would change the conclusions of the paper. 2. The authors considers out-of-distribution generalization in this paper, it is unclear whether the conclusions also apply to other concepts of robustness, such as adversarial robustness. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In figure 3, why the fine-tuning epoch is 12? 2. Would the number of fine-tuning epochs affect the robustness of the pre-trained model? The authors found that by only using 5K samples for pre-training offers no robustness gain, this raises the question that if longer fine-tuning will also reduce robustness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback! **“It is not clear if different fine-tuning methods (only fine-tune the last layer, etc) would change the conclusions of the paper.”** This would be a useful direction for future investigation. Our paper includes experiments with linear probes for CLIPs pre-trained on different data sources (Figure 12). There we find that the fine-tuned models lie on the same linear trend as models pre-trained on ImageNet and fine-tuned end-to-end. We will include experiments with different fine-tuning methods in the subsequent version of our paper. We expect this to follow similar linear trends as these kinds of training interventions have been studied previously in [1]. **“The authors consider out-of-distribution generalization in this paper, it is unclear whether the conclusions also apply to other concepts of robustness, such as adversarial robustness.”** This is an interesting point. We are primarily interested in the study of natural distribution shifts as current paradigms for pre-training seek to have a very general backbone that encompasses a wide range of data, in order to be robust to many downstream natural distribution shifts (e.g. CLIP). We believe that having a better understanding of how to choose pre-training distributions to be robust to these kinds of shifts would be highly relevant. We agree that studying other robustness concepts is an interesting avenue for future work, but is not a focus of our current work, especially since adversarial robustness itself can be defined in many different ways in existing literature. **“In figure 3, why the fine-tuning epoch is 12?”** This choice follows from the experiment setup in previous work [1]. In Figure 3, using linear trends obtained from [1], we also study the impact of the number of fine-tuning epochs on how well the resulting model’s performance fits the linear trend, and find that varying the number of epochs does not change the linear trend significantly. We also include new experiments with longer fine-tuning (see Figure 1 (left) of the rebuttal PDF). **“The authors found that by only using 5K samples for pre-training offers no robustness gain, this raises the question that if longer fine-tuning will also reduce robustness.”** Per your comment, we have added new experiments fine-tuning models pre-trained on different data quantities (5K and 25K) for double the number of epochs. We observe that longer fine-tuning does not change the resulting trend lines. Refer to Figure 1 (left) of our rebuttal PDF for more details. [1] Accuracy on the Line: On the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization. Miller et al., 2021. --- Rebuttal Comment 1.1: Title: To Reviewer 6SSm: Please respond to the author rebuttal Comment: Dear Reviewer 6SSm, The deadline for author discussion period is approaching soon. Please respond to the author's rebuttal and indicate whether your concerns have been addressed. Thank you! -AC
Summary: This paper investigates the role of pre-training data diversity on fine tuning robustness. They vary various factors like label space, label semantics, image diversity, data domains, and data quantity of the pre-training distribution to investigate how these factors impact the robustness of the models. Some interesting insights include similar label semantics doesn't necessarily improve the robustness of the model and increasing per-class examples doesn't necessarily improve the robustness of the model. Strengths: - Provides insights on pre-training models like how various factors like data diversity, label space, label semantics etc affect the robustness of the models. Might be helpful for ML practitioners trying to decide what kind of data is best suited for pre-training - Experiments are thorough and clean. Paper is also easy to read highlighting main results. Weaknesses: - Major section of the work has been focused on supervised pre-training which is becoming less and less common with the advent of self-supervised learning methods. It would have been interesting to look at these of pre-training strategies in much more depth. - Only one downstream task considered in the experiments (iWildCam-WILDS). Hard to quantify if these results generalize to other datasets. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can these results generalize to other downstream task as well? In this work only iWildCam-WILDS dataset is incorporated into results, it would be interesting to see if these results generalize to other datasets and datasets from different domains. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the effort they have put into reviewing our paper. **“It would have been interesting to look at self-supervised learning methods in much more depth.”** This is a good direction for future work. The focus of our current work is on supervised pre-training. Per the reviewer’s feedback, we have included experiments with SimCLR. In Figure 2 (right) of the rebuttal PDF, we include a comparison of a ViT-B/32 model pre-trained with SimCLR on ImageNet and the same architecture pre-trained on ImageNet in a supervised fashion. We see that models trained on the same dataset exhibit similar robustness trends despite the differing pre-training strategies. We note that certain properties of the pre-training distribution that we examined are harder to control/not applicable in the self-supervised learning regime (e.g. label granularity), which is why it is not the main focus of the paper. **“Only one downstream task considered in the experiments (iWildCam-WILDS). Do these results generalize to other datasets?”** This is a notable limitation which we acknowledge in the Conclusion section. We do study some of the properties of the pre-training distribution in the context of DomainNet (see Appendix E) and show that our main findings on the impact of label granularity and data quantity hold. Overall, characterizing distribution shifts where pre-training on non-web-scale datasets can provide a significant boost in effective robustness compared to training from scratch is an important direction for future work. The effect of varying pre-training distributions on downstream effective robustness has been studied at larger scales across a variety of distribution shifts (e.g. [1]), but at smaller scales the only shift that consistently sees substantial robustness benefits from small-scale pre-training is iWildCam-WILDS [2], making it a useful testbed. [1] Quality Not Quantity: On the Interaction between Dataset Design and Robustness of CLIP. Nguyen et al., 2022. [2] Accuracy on the Line: On the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization. Miller et al., 2021. --- Rebuttal Comment 1.1: Title: To Reviewer XBK9: Please respond to the author rebuttal Comment: Dear Reviewer XBK9, The deadline for author discussion period is approaching soon. Please respond to the author's rebuttal and indicate whether your concerns have been addressed. Thank you! -AC --- Rebuttal Comment 1.2: Title: Response to rebuttal Comment: Thanks for addressing all the concerns raised and adding experiments with SimCLR. I will like to keep my score.
Rebuttal 1: Rebuttal: We thank the reviewers for their thorough, insightful comments and have made revisions based on their feedback. We are glad they found the work well motivated, novel, and the contributions to be of value to the community. Here we include new results and plots to respond to some of the concerns by reviewers, see the attached PDF. We are happy to engage in further discussion and interested in any additional feedback the reviewers may have. Pdf: /pdf/6e1f32b84cec5f3b8fda1df332dd40ce40878d94.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
DoWG Unleashed: An Efficient Universal Parameter-Free Gradient Descent Method
Accept (poster)
Summary: This paper is concerned with parameter free - adaptive first order optimisation method, that is a method that achieves optimal rates of convergence in the class of function considered without having access to a priori quantities such as smoothness of the function or the minimum value of the function. The authors discuss on normalized gradient descent and they show that this method is adaptive to the smoothness of the function when the step-size is proportional to the distance to the minimizer. The main contribution of the paper is to introduce a new estimation of the distance to the minimizer that achieves optimal rate up to logarithm factor. The method is an improvement over the so-called distance over gradients (DoW) by introducing weights (DoWG) so that it gives more weight to the last gradients. Numerical experiments are provided, including comparison to Adam (which is not in the same category...). Strengths: I am not a specialist of the field so my contribution in this review was to check the proofs, also in the supplementary material. All the proofs are correct. The paper is globally nicely written, easy to read and accessible for an outsider of the field as I am. Although from the theoretical point of view, DoWG is incremental (correct me if I am wrong), numerical experiments tend to show that it is clearly more efficient practically. Although the resulting algorithm performs poorly compared to Adam, it seems a fair contribution on the understanding and the technics in the class of methods considered. Weaknesses: This work appears as an improvement over DoG. It is a bit surprising that the result on Normalized Gradient Descent is new; I was not able to find it in the literature although, again, I am not an expert in the field. The proposed method DoWG of using weights in front of the gradients is a natural idea. From the theoretical results, DoWG seems a rather incremental advance over DoG. The resulting algorithm is probably not going to have any impact on the optimization method in deep learning. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Could the rate obtained in the analysis of NGD be improved? - line 131: gives eta so that - line 554: Therefore. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: do no apply. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for your evaluation of our work. 1. "The resulting algorithm is probably not going to have any impact on the optimization method in deep learning." On the contrary, the method has been independently implemented in at least one GitHub repository with 500+ stars for fine-tuning stable diffusion models, which we became aware of only after the submssion. We cannot link the repository per NeurIPS rules, but we can provide it to the Program Chairs for verification. 2. On improving the rates for NGD: The rate for NGD in the non-smooth setting matches the optimal rate $\frac{GD}{\sqrt{T}}$ for first-order algorithms. In the smooth setting, we do not know if it can be improved. Our theory and experiments suggest the agreement is pretty tight, as we observe the effective stepsize indeed oscillates around $1/L$. --- Rebuttal Comment 1.1: Comment: Thank you for your answer.
Summary: The paper presents a modification to the recently proposed DoG algorithm to obtain a adaptive algorithm for the deterministic, convex and [Lipschitz or Smooth] setting that does not need any hyperparameter to achieve a convergence rate competitive with algorithms that know the problem-specific constants. The submission discusses the benefits of normalization, linking its behavior to the edge of stability phenomenon and providing an analysis that normalized gradient descent obtains the convergence of GD on smooth function with step-size 1/L without needing to know the smoothness constant. Strengths: The submission is well presented and I could understand the main objective of the paper at first read. The DoWG algorithm, and especially its proof, is simpler than the one found the prior work of Ivgi et al. (2023) on DoG, and would be an easier starting point for readers trying to understand algorithms that can adapt to the initial distance to the optimum. Weaknesses: Beyond the DoWG algorithm, the contributions are also already known in the literature. To the credit of the submission, this is clearly mentioned for the observation that normalized gradient descent on smooth functions leads to a similar behavior to the edge of stability phenomenon (Arora et al., 2022). However, the result that normalized gradient descent obtains the smooth rate can be found in the work of Levy (2017), which seems to have been missed. The submission also suffers from a lack of formalism on some of its claims. Some concepts such as weak and strong adaptivity, “adapting to geometry” and what is meant by the edge of stability are left vague, and would benefit from a formal definition and nuanced discussion. If those issues, detailed in the question section, are addressed in a revision, I will increase my score to 6. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: **Prior work** I found the discussion of the adaptivity to smoothness of normalized GD and the key inequalities (5—7) useful to understand how DoWG achieves a similar result. However, the paragraph prior to Theorem 4 should cite Levy (2017). Their analysis of what they call the AdaNGDk algorithm (an overly general form that recovers normalized gradient descent with a decreasing step-size of the order $1/\sqrt{t}$ for k=1), in Theorem 2.1 is the equivalent of theorem 4 in the current submission, except with a decreasing step-size of $1/\sqrt{t}$ rather than the constant but horizon-dependent step-size $1/\sqrt{T}$ used here. Kfir Levy, “Online to Offline Conversions, Universality and Adaptive Minibatch Sizes”, NeurIPS 2017 --- **Formal definition for some concepts** **Strong vs. Weak Adaptivity** The working definitions of “weak” and “strong” adaptivity are not clear from the current writing, or seems to lead to undesirable conclusions. I find the separation useful, and being able to formalize how sub-optimal the “adaptivity” of AdaGrad-Norm is in the smooth setting would be helpful. However, the definition of weakly adaptive requires that “[weak-adaptivity] just seeks non-divergence given stepsize misspecification. […] the algorithm’s objective is to ensure that the learning process does not result in divergence, even if the chosen stepsize is not optimal” (L145). This definition is problematic because an algorithm that does not move, or gradient descent with a step-size of 0, satisfies the above definition. Similarly, for strongly adaptive, the text requires that “an algorithm [is strongly adaptive if] it preserves the convergence rate of optimally tuned gradient descent without any manual tuning.” The proposed DoWG algorithm does not fit this description as the rate worse, if only by a log-factor. A formalization of the above that could work would be to say that an algorithm is strongly adaptive in a problem setting if it achieves the same convergence rate, up to polylogarithmic terms, as an algorithm that knows problem specific constants. Weakly-adaptive could similarly be defined by allowing for multiplicative polynomial factors. **Adapting to geometry** The term “adapting to geometry” is used in multiple places, but it isn’t clear what is meant by that statement. This term is often used in the literature as a way to convey the intuition of why a method is good, but with limited formalism. For example, Newton adapts to the geometry of the problem by using the Hessian, or AdaGrad adapts to the geometry of the problem by finding a preconditioner that is competitive with the optimal one in hindsight. On each usage of “adapting to geometry”, I do not understand what is meant,. I strongly suggest avoiding it and using a more direct description instead. - L32: “We say an algorithm is universal if it adapts to many different problem geometries or regularity conditions on the function f” (should be removed and only mention regularity condition) - L56: “In particular, such adaptive methods empirically show the ability to adapt to the local geometry of the problem (Cohen et al., 2022), especially compared to plain (stochastic) gradient descent (Pan and Li, 2022).” L107: “There are other justifications for why adaptive methods work outside of adapting to geometry.” - L246 “Therefore, we may may expect this to aid the method in adapting to the geometry of the problem once far away from the initialization x0.” --- **A mention of the difficulties of obtaining similar results in online learning would be helpful to the reader.** While the online-learning algorithms of Streeter and McMahan (2012), Orabona et Pal (2016) and Orabona and Cutkosky (2020) aim for a similar goal of not having to know the diameter of the set and the Lipschitz constant, the rates presented for the current algorithm or those in the prior works of of Ivgi et al. (2023) and Carmon and Hinder (2022) are not achievable in the adversarial that is common in online learning, see Cutkosky and Boahen (2016, 2017). Ashok Cutkosky, Kwabena Boahen, “Online Convex Optimization with Unconstrained Domains and Losses”, NeurIPS 2016 Ashok Cutkosky, Kwabena Boahen, “Online Learning Without Prior Information”, COLT 2017 --- **Question on Universality and the edge of stability** The convergence of normalized GD or DoWG are not discussed under strong-convexity, which I think should be mentioned given the focus on universal algorithms. Especially as some alternative algorithms such as the Polyak step-size do benefit from strong-convexity, whether in the smooth or Lipschitz+bounded set case. Given that the empirical results shown in Figure 2 are on a strongly convex problem and the effective step-size oscillates around a constant 2/L, I am interpreting it as the algorithm *not* achieving a linear rate (as the gradient norm should also go down as a linear rate, which does not seem to be the case if $\eta/\Vert\nabla f(x_t)\Vert$ is constant)? --- **typo?** - L80: "Orvieto et al. (2022) show that a variant of the Polyak stepsize with decreasing stepsizes can recover the convergence rate of gradient descent in the deterministic setting, provided the stepsize is initialized properly" -- this should be stochastic? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and constructive criticism. We address your concerns below: 1. On normalized gradient descent: We thank you so much for pointing out that the algorithm of Levy (2017) reduces to normalized gradient descent. We were not aware of this work and will include it. We'd like to point out that because we do the offline case rather than online learning, our analysis is much simpler, as it is not a reduction from AdaGrad. The main contribution of our work is not the NGD analysis but the DoWG algorithm, and NGD's analysis is used as motivation. We will give the correct citation for the result on NGD's convergence in the revision. 2. On weak vs strong adaptivity: We agree that the definition as-is is not very rigorous. We shall modify the definition per your suggestion, that an algorithm must also achieve the same rate as some baseline (e.g. Gradient Descent) that knows the convergence parameters. Moreover, we will add that we allow for polylogarithmic factors: this is not an artifact of our analysis, but is tight, as in general without knowledge of problem parameters we have to suffer at least a $\sqrt{\log \log \frac{D}{d_0}}$ or a $\sqrt{\log \frac{D}{d_0}}$ factor, depending on the class of methods, see [1, Theorems 4 and 6]. 3. On "adapting to geometry": We agree that this term is too vague, despite its common usage in the literature on adaptive methods. Instead, we shall change it to adapting to smoothness/non-smoothness and/or regularity. We thank you for pointing this out. 4. On strong convexity: Indeed, due to the fact that NGD forces the effective stepsize to be 1/L, NGD does *not* adapt to strong convexity out of the box if we use a constant stepsize. Therefore a linear rate is not possible. We note that if problem constants are known, it is possible to extend to use an exponentially decreasing stepsize to get linear convergence out of NGD (see [2]). However, in general, it is easy to adapt to strong convexity (for any parameter-free algorithm with rates under smoothness and convexity) without knowledge of problem constants at all using the restarting scheme of [3]. The application of the restarting scheme of [3] to NGD/DoWG is straightforward, and we will include it in the appendix. 5. On the difficulty of obtaining the same results in the online learning setting: We agree that this would be instructive to include, the main difference is that in the offline setting, the adversary cannot change the function arbitrarily while just preserving the norm bound, as in the lower bound of [4]. Instead, the iterates have to come from the same function in response to the sequence of iterates picked by the algorithm. This allows for much improved rates. We will mention this in detail. [1] Konstantin Mishchenko and Aaron Defazio. "Prodigy: An Expeditiously Adaptive Parameter-Free Learner" [2] Damek Davis, Dmitriy Drusvyatskiy, Kellie J. MacPhee, Courtney Paquette. "Subgradient methods for sharp weakly convex functions". [3] Renegar, James, and Benjamin Grimmer. "A Simple Nearly Optimal Restart Scheme For Speeding Up First-Order Methods" Foundations of Computational Mathematics, vol. 22, no. 1, Mar. 2021, pp. 211–56. Crossref, https://doi.org/10.1007/s10208-021-09502-2. [4] Ashok Cutkosky, Kwabena Boahen. "Online Convex Optimization with Unconstrained Domains and Losses" --- Rebuttal Comment 1.1: Title: Thanks Comment: Thanks for the detailed response. My main concerns have been addressed. > The application of the restarting scheme of [3] to NGD/DoWG is straightforward, and we will include it in the appendix. The authors are welcome to include it, but I don't think the restart scheme extension is necessary. Mentioning that "universality" often includes the strongly-convex case, but does not apply to normalized GD (and would require an extension) should be sufficient. --- (Minor point; please prioritize other responses) Could you clarify the following? > On strong convexity: Indeed, due to the fact that NGD forces the effective stepsize to be 1/L, NGD does not adapt to strong convexity out of the box if we use a constant stepsize. Therefore a linear rate is not possible. If the effective step-size $\eta/\Vert\nabla f\Vert$ was forced to be 1/L, wouldn't the algorithm reduce to GD with step-size 1/L and get linear convergence? --- Reply to Comment 1.1.1: Comment: Thank you for following up with us. We are happy that your main concerns have been addressed. 1. (On universality) We will add that universality includes the strongly-convex case but doesn't apply to NGD. We agree it is important to point that out. 2. (On NGD) Because the effective stepsize stabilizes, this means that we cannot get much better than $\| \nabla f \| \approxeq \eta L$. What we observe (Figure 2 in the paper) is that when the gradient norms just keep oscillating. That is, the algorithm does a step that decreases the gradient norm, which results at the next step in a much larger effective stepsize, this in turn is too large (larger than the threshold $2/L$) and causes divergence, which forces the effective stepsize at the next iteration to be smaller, and so on. In this way, the effective stepsize oscillates around $2/L$, while at the same time not enabling linear convergence.
Summary: This paper considers the problem of optimizing a convex function over a convex, closed, and possibly compact set $\mathcal{X}$. In particular, they are interested in finding a first-order method which is (i) universal (i.e., the same algorithm can be used when the objective is Lipschitz or smooth), (ii) parameter-free (i.e., converges without any hyperparameter tuning w.r.t. problem parameters) (iii) has no search subroutines, and (iv) strongly-adaptive (i.e., preserves the convergence rate of an optimally-tuned GD without tuning). The authors propose an algorithm, Distance-over-Weighted-Gradients, which (essentially) achieves all of these desired properties. The two caveats are (i) the convergence rates have an extra log factor not present in optimally-tuned GD, and (ii) the convergence rates depend on the diameter of the constraint set $\mathcal{X}$ instead of on $D_0 = || x_0 - x^* ||$. Towards establishing this result, the authors prove that normalized GD universal but not parameter-free. Strengths: This paper provides an algorithm with the remarkable properties of being universal, parameter-free, (nearly) strongly-adaptive, and not requiring a search subroutine. Unless I have misunderstood, this seems to be the first such algorithm satisfying all of these properties simultaneously. Moreover, the analysis is well-written and easy to follow. I also found the discussion of the universal properties of normalized GD to be quite instructive and interesting. Overall, I think this is a very nice paper. Weaknesses: Much of the analysis appears to rely on or extend arguments from Ivgi et al., (2023). Further, the results in this paper are restricted to the deterministic setting, while the results of Ivgi et al. hold also in the noisy setting (albeit, assuming uniformly bounded stochastic gradients, and thus the results are restricted to the Lipschitz setting). Despite this, I still think the results in this paper are quite nice. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: The results of Ivgi et al. (2023) hold also in the stochastic setting (assuming stochastic gradients are uniformly-bounded). What are the main barriers to extending your results to the stochastic setting? Is there any similar result that you can obtain when, e.g., the noise of stochastic gradients is uniformly bounded? It might be beneficial to the paper to add a discussion on why extending to this setting is difficult. In the introduction of your paper, you mention that the constraint set $\mathcal{X}$ is “(potentially) compact“, However, your main results (Theorems 5 and 6) assume that the diameter $D$ of $\mathcal{X}$ is finite (but unknown). Is it possible to extend these results to the unconstrainted case where $\mathcal{X} = \mathbb{R}^d$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for your positive evaluation of our work. 1. On the stochastic setting: We agree that the stochastic case is important and merits exploration on its own, but we note that the results on the convergence of DoG in the stochastic case are not parameter-free, as they require knowledge of the Lipschitz constant on an unknown set. We have tried doing similar (non parameter-free) theory as in DoG, but unfortunately there is some added difficulty over the analysis of DoG. The main difference is that in DoG, the bound on the Lipschitz constant $G$ is used to apply concentration and obtain that the cumulative noise is small. Because in DoWG we may have no such bound, we cannot apply the same concentration inequality, at least in the smooth setting. We will add a more thorough discussion on this in the paper. 2. Unbounded domains: It is possible to extend DoWG to work for unbounded domains by using the same "taming" trick as in DoG: that is, dividing the stepsize by a logarithmic factor. We have carried out the analysis and everything works. The reason we did not include this algorithm is that it does not perform well in practice. We will include it in the appendix should the paper be accepted. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thank you for your response. I've read the discussion with the other reviewers, and will maintain my original score. In the next revision of the paper, I would recommend that the authors include their results on rates which depend only on $D_0$, as I agree with Reviewer WSp1 that these results are interesting (even if practical performance is not as good as the other step-size).
Summary: This paper introduces a new algorithm for optimization problems. The paper introduces DoWG (Distance over Weighted Gradient) which is a simple extension of a previous work DOG (Distance over gradient). It is a parameter-free gradient optimization method where the step size is automatically adjusted to the function being optimized over the course of the optimization. This works also provides justification of why DoWG produces close to the optimal convergence that can be achieved by Normalized Gradient Descent (NGD) for either Lipschitz or smooth functions. As part of this claim the authors also show a new derivation of the optimal convergence for NGD on smooth functions. Finally the authors show results on training Deep Nets for vision and these demonstrate the while DoWG performs better than other parameter-free methods it doesn't attain the level of accuracy achieved by momentum-based methods such as ADAM. Strengths: ## Defining Universality The authors have very systematically defined their concept of a Universal Gradient Descent-based optimization algorithm. In fact as part of this definition of Universality they have computed the optimal convergence rate of NGD for smooth functions which appears to be a new result although it is not a very surprising result. ## Discussions of NGD The paper provides some very strong intuition about how NGD is self-stabilizing. The discussion and example in the paper provides good insight about how parameter-free methods work in general. Weaknesses: ## Comparison to DOG The proposed algorithm itself is only slightly different than the existing DOG algorithm and the intuition of why it was proposed is not that clear. The paper mentions that DoWG gives higher weight to later gradients, but why is this important? They do also demonstrate on line 263 that DoWG step sizes could be larger than DOG's and this seems reasonable, but overall the motivation for the improvement is not clear. Now the paper does show that DoWG has optimal NGD convergence for Lipschitz functions, but it points out that the same holds for DOG. There is no discussion of whether optimal NGD convergence that is proven for smooth functions with DoWG also holds for DOG or not. If the authors could prove that DOG *doesn't* have optimal NGD convergence for smooth functions then their claim that DoWG is the first parameter-free Universal gradient descent algorithm without a search subroutine would be overall stronger. ## Performance Relative to ADAM The authors should certainly be lauded for including a superior baseline result. However, the fact that ADAM does so much better than DoWG makes their work more of a theoretical curiosity. It appears that the authors hurried through the final evaluation and included very limited results. The previous work that introduced DOG had results showing both fine-tuning as well as training a model from scratch. I would encourage the authors of the current work to do something similar as well as introduce variants such as L-DOWG. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Does DOG have optimal NGD convergence for smooth functions? Have the authors found a reason why this is not the case or why this would be unlikely and could provide a justification? - Have the authors considered L-DOWG (layer-wise DOWG similar to L-DOG)? - The DOG paper (https://arxiv.org/pdf/2302.12022.pdf) in Table 2 page 14 show results for DOG that are superior than ADAM for training a model from scratch. Why could the authors not reproduce those results as a baseline? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you very much for your review. Please find our rebuttal below: 1. On motivation for DoWG: "The paper mentions that DoWG gives higher weight to later gradients, but why is this important?" The practice of assigning higher weights to later gradients is crucial to adaptivity to the loss curvature. For instance, if we start in a highly curved region where the gradients are large, a method that doesn't put a higher weight on the recent gradients would be stuck with small stepsizes for the rest of the training, even if it enters a flat region. This seems to be the reason why Adam works so well: it forgets older gradients at an exponential rate, as the gradient norm estimator in Adam can be written as $\hat{v}_T = \sum_{t=0}^{T} \beta_1^t g_{T-t}$. The same holds for the recently proposed Lion optimizer. The motivation is that as the optimization process continues, we get closer and closer to a minimizer and thus recent gradients give a more accurate direction to the minimizer. 2. On whether DoG can also converge for smooth objectives: at the time of writing the paper, DoG had no convergence guarantee in the smooth setting. In the most recent version of the DoG paper, the authors show that a certain iterate averaging strategy can yield smooth rates for DoG. However, the rate they show is $\mathcal{O} \left ( \frac{L D^2}{T} \left [ \log_+ \frac{D}{r_{\epsilon}} \right ]^2 \right )$ compared to DoWG's $\mathcal{O} \left ( \frac{L D^2}{T} \log_+ \frac{D}{r_{\epsilon}} \right )$, thus DoWG obtains a superior guarantee without changing the averaging strategy. This might be because DoWG takes larger, more aggressive stepsizes than DoG. Thus, while DoWG may not be the only algorithm that achieves universality, it is the best such algorithm in the literature. 3. We have considered L-DoWG, and an additional coordinate-only version of DoWG. We have made these implementations available online, but cannot link them due to anonymity. However, we were already short on space for the submission and thus didn't include them. Should the submission be accepted, we shall use the extra page to add in both algorithms and their motivation. 4. On superiority to Adam: Table 2 in the DoG paper shows that DoG with a polynomial decay averaging strategy performs better than Adam, but vanilla DoG does not. We reproduced a similar experiment in our paper, without the polynomial decay averaging. The results in our paper match those of Table 2 without polynomial decay averaging. --- Rebuttal Comment 1.1: Title: no major change based on response Comment: Thanks for the response. - For 4. It would help to list exactly where in the main text or supplementary material you have results showing better accuracy than Adam for Imagenet or CIFAR. (As shown in the DoG paper with polynomial decay averaging). --- Reply to Comment 1.1.1: Comment: The experiments in our paper do not show superiority of either DoG/DoWG over Adam because we do not use polynomial decay averaging. What we meant to say is that we reproduce that vanilla DoG (without polynomial decay averaging) is worse than Adam.
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper proposes a new algorithm, DoWG, which modifies DoG (Ivgi et al., 2023) by utilizing the weighted sum instead of the usual sum for the squared gradients. Convergence is established for convex and convex & smooth deterministic settings, under the assumption of a (possibly unknown) bounded domain. The method is adaptive to the Lipschitz/smoothness constant and is parameter-free in the sense that the bound of the domain may be unknown. Experiments comparing DoWG to several adaptive methods are presented. Additionally, a complementary result for convex & smooth NGD establish that NGD is weakly adaptive to the diameter of the problem. Strengths: 1. The modification from DoG to DoWG is appealing as it results in larger stepsizes, potentially leading to faster convergence. 2. A (limited) set of experiments is provided, demonstrating positive empirical evidence for the effectiveness of the DoWG method. Weaknesses: 3. It is important to highlight that the convergence of DoWG is established exclusively in the deterministic setting, making it weaker than DoG's convergence, which extends to both deterministic and stochastic settings (note that AdaGrad-Norm results also apply to the stochastic setting). The paper should explicitly emphasize this difference and consider exploring DoWG's performance in the stochastic setting. 4. The theoretical results for DoWG hold significance when dealing with unknown domain bounds. When the domain bound is known, AdaGrad-Norm with stepsizes $\eta_t=D/\sqrt{\sum_t \lVert \nabla f(w_t) \rVert^2}$ has tighter bounds for both convex and convex & smooth settings. 5. The term "parameter-free method" typically used for rates that depends on the distance to the comparator or the minimizer. In this context the convergence of DoWG is not truly parameter-free as it is relevant only for bounded domains and does not improve with a good initialization. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 6. Can DoWG achieve similar convergence results to DoG in the stochastic setting? If it is yet established, what is the additional difficulty with respect to DoG? 7. In the deterministic setting, would DoG achieve similar convergence results to DoGW, or was the modification to DoWG necessary to address specific issues in that setting? 8. What is the motivation for the problem of deterministic optimization with unknown bounded domain? Corollary 1 of Ivgi et al. (2023) mentions two-stage stochastic programming as an application, but is the deterministic counterpart of interest as well? Overall, while DoWG shows promise and may yield novel convergence properties, I find the current established guarantees to be partly insufficient compared to previous work on adaptive and parameter-free methods. Edit: Per the authors response regarding the unbounded case with adaptivity to $D_0$ and the improved log factor with respect to the smooth result of DoG (unknown at the time of submission), i raise my score. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your positive evaluation and constructive criticism of our work. We agree that DoWG's main strength is that it allows for much larger stepsizes than DoG both in theory and practice. We now address the weaknesses and questions: 1. On the stochastic case: We agree that the stochastic case is important and merits exploration on its own, but we note that the results on the convergence of DoG in the stochastic case are *not* parameter-free, as they require knowledge of the Lipschitz constant $G$ on an unknown set. We have tried doing similar (non parameter-free) theory as in DoG, but unfortunately there is some added difficulty over the analysis of DoG. The main difference is that in DoG, the bound on $G$ is used to apply concentration and obtain that the cumulative noise is small. Because in DoWG we may have no such bound, we cannot apply the same concentration inequality, at least in the smooth setting. 2. Comparison with AdaGrad-Norm: We note that if the domain bound is known, we can simply plug it into the initialization of DoWG, and DoWG will not change it: This is a consequence of Theorems 5 and 6, where if we put $r_{\epsilon} = D$ (i.e. set the seed initialization to $D$) then $\log_+ \frac{D}{r_{\epsilon}} = \log \frac{D}{r_{\epsilon}} + 1 = \log 1 + 1 = 1$. Therefore, the rate becomes exactly the same as AdaGrad-Norm, with no additional logarithmic factor. 3. Convergence of DoWG for unbounded domains: We note that if we slightly reduce the DoWG stepsize, by using $\eta = \frac{r_t^2}{v_t} \frac{1}{\log_+ \frac{v_t}{v_0}}$, this allows for convergence rates that depend only on $D_0$ rather than the domain bound $D$. The main reason we did not include this variant is that it did not perform well in practice. We shall include this variant and its theory in the final paper, should it be accepted. 4. Comparison with DoG in the smooth setting: In the most recent version of the DoG paper, the authors show that a certain iterate averaging strategy can yield smooth rates for DoG. However, the rate they show is $\mathcal{O} \left ( \frac{L D^2}{T} \left [ \log_+ \frac{D}{r_{\epsilon}} \right ]^2 \right )$ compared to DoWG's $\mathcal{O} \left ( \frac{L D^2}{T} \log_+ \frac{D}{r_{\epsilon}} \right )$, thus DoWG obtains a superior guarantee without changing the averaging strategy. This might be because DoWG takes larger, more aggressive stepsizes than DoG. 5. Motivation for unknown domain: Our main motivation in DoWG is to apply this theory in practice. We note that in practice, a combination of reasonable initialization and bounded updates (enforced through e.g. gradient clipping) lead to a domain bound on the iterate that is unknown. For example, [1] show that neural networks with Gaussian initialization and bounded data norms are semi-smooth and generate a sequence of bounded iterates, with the domain bound being a quite complicated function of the neural network's architecture. While the objective of [1] is non-convex, our theory can instead be applied to convex neural networks with similar properties, e.g. Gated Linear Networks common in data compression algorithms [2]. [1] Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song. A Convergence Theory for Deep Learning via Over-Parameterization. arXiv:1811.03962. [2] Joel Veness, Tor Lattimore, David Budden, Avishkar Bhoopchand, Christopher Mattern, Agnieszka Grabska-Barwinska, Eren Sezener, Jianan Wang, Peter Toth, Simon Schmitt, Marcus Hutter. Gated Linear Networks. arXiv:1910.01526 --- Rebuttal Comment 1.1: Comment: I thank the authors for their elaborate comment. 1. I strongly suggest to include the result that depends only on $D_0$, including a discussion of the result in the main body. Calling the method "parameter-free" without adaptivity to the distance form the minimizer (comparator in online learning) might be misleading. 2. The same goes for support for unbounded domains. It is fine to have a theoretical method and an "in-practice" small modification of the method. Those two guarantees makes the theoretical setting much less restricted. 3. Did the authors considered a version that use the Lipschitz parameter for the stochastic setting? Will this modification solve the stochastic setting? A common distinction is that a method is parameter-free if it is adaptive to the distance from the minimizer (or comparator, i.e., in parameter-space), and scale-free if no knowledge of the Lipschitz constant is needed (e.g. [1,2]). [1] Orabona, Francesco, and Dávid Pál. "Open problem: Parameter-free and scale-free online algorithms." Conference on Learning Theory. PMLR, 2016. [2] Orabona, Francesco, and Dávid Pál. "Scale-free online learning." Theoretical Computer Science 716 (2018): 50-69. --- Reply to Comment 1.1.1: Comment: Thank you for your quick response. 1. (On proof with $D_0$ and unbounded domains). We shall include the result on the variant of DoWG with dependence on the initial distance $D_0$ in the final paper. The result is not very difficult to derive, and we include the main lemma here for completeness. The main idea is that by dividing the stepsize by a running logarithmic factor, we can prove the iterates stay bounded by a constant multiplied by the initial distance $d_0$. Then we can just apply the ordinary DoWG proof (at the cost of an additional logarithmic factor only). The proof follows follows [1]. Lemma 1 (Stability). Suppose that $r_0 \leq d_0$. Then the iterates of DoWG with stepsizes $\eta_t = \frac{\bar{r}_t^2}{2 \sqrt{v_t}} \frac{1}{\log \frac{2v_t}{v_0}}$ satisfy $\bar{d}_t \leq 16 d_0$ and $\bar{r}_t \leq 16 d_0$ for all $t$. Proof. We have by convexity $$ d_{k+1}^2 - d_k^2 \leq \eta_k^2 |{g_k}|^2 $$ Summing up from $k=1$ to $k=t$ we get by [1, Lemma 6] $$ d_t^2 - d_1^2 \leq \sum_{k=1}^{t} \eta_k^2 |{g_k}|^2 = \sum_{k=1}^{t} \frac{ {\bar{r}\_k}^4 }{v_{k-1}} \frac{|{g_k}|^2}{4 \log^2 \left( \frac{2v_{k-1}}{v_0} \right)} \leq \frac{\bar{r}\_t^2}{4} \sum{k=1}^{t-1} \frac{v_k - v_{k-1}}{v_{k-1} \log_+^2 \left( \frac{v_{k-1}}{v_0} \right)} \leq \frac{1}{4} \bar{r}\_t^2 $$ Thus we have $d_{t+1}^2 \le d_1^2 + \frac{\bar{r}t^2}{4}$. Now suppose in the way of induction that $\bar{r}\_t^2 \leq 8 d_1^2$, then applying the last equation we get $d\_{t+1}^2 \leq d_1^2 + \frac{8}{4} d_1^2 = 3 d_1^2$. Taking square roots gives $d{t+1} \leq \sqrt{3} d_1$. Subsequently, by the triangle inequality we have $$ |{x_{t+1} - x_1}| \leq |{x_{t+1} - x_{}}| + |{x_{} - x_1}| \leq \left[ \sqrt{3} + 1 \right] d_1 $$ Squaring both sides gives $|{x_{t+1} - x_1}|^2 \leq \left( \sqrt{3}+1 \right)^2 d_1^2 \leq 8 d_1^2$. It follows that $\bar{r}\_{t+1}^2 \leq 8 d_1^2$, and the induction thus gives us $\bar{r}_t^2 \leq 8 d_1^2$ for all $t$. Next, observe that $$ d_1 = |{x_1 - x_{}}| \leq |{x_0 - x_1}| + |{x_0 - x_{}}| = r_0 + d_0 \leq 2 d_0 $$ It follows that $\bar{r}\_t \leq 16 d_0$ for all $t$. Finally, observe that the same analysis implies $d_t \leq 16 d_0$ for all $t$. Therefore, we get that the iterates stay in a bounded domain, and with a small modification of the main proof, we get the same result as the original paper, with $D$ replaced by $16 D_0$ and an additional log factor. We shall include the result in full in the final paper. 2. If we do have knowledge of the Lipschitz parameter (or the Lipschitz smoothness parameter in the smooth case) then we can do the stochastic case. We did not consider this for long because the algorithm is then no-longer parameter free. In the online learning setting, it is not possible to do away with knowledge of the Lipschitz parameter in the worse case, as the lower bound of Cutkosky and Boahen (2016) shows. This may not be the case for offline stochastic convex optimization. References: [1] Ivgi et al., DoG is SGD's best friend, 2023.
Summary: The paper proposes a new optimization algorithm, DOWG, that does not rely on additional hyperparameter tuning or a line search subroutine. Interestingly, the paper also contains a proof and analysis regarding the behavior of NGD and shows that (1) NGD adapts to the smoothness of continuous loss surfaces and (2) NGD operates at the edge of the maximum stable learning rate for continuous loss functions. Finally, the paper presents two empirical studies on CIFAR10 with two different neural networks. Strengths: I am not an expert in optimization, but I thoroughly enjoyed reading this paper. The writing is clearly organized and accessible to readers with only some rudimentary knowledge of optimization. Regarding related work, the paper carefully assigns credit to existing work and precisely describes how the current work stands out at each step along the way, which demonstrates a high level of expertise in the field. The theoretical contribution seems solid, and the new algorithm seems to be well motivated and work well, when compared with other parameter-free algorithms. I strongly recommend this paper be accepted. Weaknesses: DOWG does not work as well as Adam with cosine annealing on both neural network problems. However, this is not very surprising because cosine annealing is extremely strong. I do not consider this to be a fatal issue. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Nil Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Nil Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for your very positive evaluation of our work. We agree that Adam with cosine annealing is an incredibly strong baseline, and we hope that future work can reveal a principled way of improving over that. --- Rebuttal Comment 1.1: Comment: Thanks. I've read the response and have nothing further to add.
null
null
null
null
Group Robust Classification Without Any Group Information
Accept (poster)
Summary: The paper identifies that current methods tackling spurious correlations requires group annotation in either the training or validation stage. To address this limitation, the authors propose uLA, a bias-unsupervised method that achieves superior empirical performance without any group annotation. Strengths: The paper studies an important problem: how to tackle spurious correlations when group annotation is unavailable. The proposed algorithm is simple yet effective, demonstrating superior empirical improvement. The performance on the systematic generalization is particularly promising as it is a much more realistic scenario. Weaknesses: Although the authors show some conceptual and empirical advantage of SSL over pure supervised training, it remains unclear how crucial a role SSL has in the training pipeline. I hope the authors can provide clearer insights about the criterion of selecting the pretraining scheme for the shared backbone model. (Also see question 3) The robustness of the algorithm to different hyperparameter settings is questionable (also see question 2). Not only does the biased prediction matter, but the biased logits are also quite important in the training pipeline. However, the logits can take dramatically different values given different hyperparameters. I hope the authors can provide more insights in this regard and perhaps include more principled method for hyperparameter search. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. For the reported baseline, how are the models trained? Do they also go through the MOCOV2+ pretraining and linear probing as uLA? 2. On line 225, it is mentioned that the model needs to be validated at every epoch, which is uncommon in practice. Could you provide some justification for this procedure? Does it mean that the algorithm is sensitive to hyperparameters? 3. Why use an Imagenet pretrained model on waterbird? Are there any scenarios where using SSL hurt the performance? I hope the authors can provide more detail about this design choice and more discussion about the selection of the training scheme of the shared backbone. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: The authors have adequately addressed the limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed assessment of our work and for highlighting the merits of our approach, as well as the importance of the problem. We address all concerns below: ### Weakness 1/Question 3.b.: Even though SSL is demonstrated to provide benefits over pure supervised learning, it is not clear how the choice of SSL pretraining affects debiased training. In **Table 1 of the rebuttal material**, we demonstrate results of uLA on cCIFAR10 (1%) with BYOL [1] and BarlowTwins [2] SSL pretraining. In both of these extra cases, uLA outperforms ERM and the SotA bias-supervised baseline. Performance with BYOL is comparable to MoCoV2+ (we have not performed extensive hyperparameter search), while performance with Barlow Twins is inferior. The latter could possibly be explained by the form of the Barlow Twins objective which attempts to match an estimation of cross-covariance between features using (spurious) training statistics to the identity. [1] Grill, Jean-Bastien, et al. "Bootstrap your own latent-a new approach to self-supervised learning." NeurIPS. 2020. [2] Zbontar, Jure, et al. "Barlow twins: Self-supervised learning via redundancy reduction." ICML. 2021. ### Weakness 2.a: Robustness to hyperparameters Beyond the typical hyperparameters in stochastic optimization of deep neural networks (learning rate, batch size, architecture, augmentation choices and others), we have perceived performance benefits when we search over the logit adjustment strength $\eta$, biased model calibration temperature $\tau$ and the number of epochs for training the linear head of the bias proxy network $T_\mathrm{stop}$. As we present in **Table 7 of Appendix E**, we have found stable search spaces for $\tau$ (values around 1.0, with default value 1.0) and $\eta$ (values larger than or equal to 1.0 with default value 1.0) across benchmarks. Furthermore, for $T_\mathrm{stop}$ we find in **left plot of Figure 2 in the rebuttal material** that the group-balanced test accuracy of a debiased classifier trained with uLA on cCIFAR10 (1%) increases given sufficient training time of the bias network and then it stays within the noise level of optimal performance. ### Weakness 2.b: Request for more principled method for hyperparameter search We want to highlight that one of the core contributions of our work is a meticulous methodology for model selection and hyperparameter search for robust/ood classification problems. We have demonstrated the effectiveness of our approach via experimenting against known baselines in the literature, often outperforming alternatives which use privileged group annotation information during hyperparameter search (Table 1,2,3). Extensive ablation experiments show that our hyperparameter search methodology is more reliable than a naive alternative validation objective (Figure 4). The choice of hyperparameter search algorithm can be any black-box optimizer, however in this work we opt for random search, which is the most simple, general and yet more effective than grid search [4]. [4] Bergstra, James, and Yoshua Bengio. "Random search for hyper-parameter optimization." JMLR 13.2 (2012). ### Question 1: Are baselines pretrained with MoCoV2+ too? No, they do not as the methods described in their respective papers do not utilize any form of pretraining for the debiased network. We compare against accuracies reported by the peer-reviewed published methods in standard benchmarks. ### Question 2: On validating the model periodically during training and model selection If we understand the reviewer correctly, there is a concern about the practice of periodic validation of a model during its training. We want to note that this is a well-documented practice, commonly referred to as ``early stopping’’. From the Deep Learning book [8, Chapter 7]: "The only significant cost to choosing this hyperparameter automatically via early stopping is running the validation set evaluation periodically during training", referring to how to choose the number of training iterations with early stopping. While in many standardized benchmarks the number of epochs has been already searched for, in ood generalization literature it is often [9] a sensitive factor and early stopping can bring benefits in generalization. Our work addresses the need for bias-unsupervised validation for robust model selection using iid data resources. In the right plot of Figure 2 in the rebuttal material, we demonstrate that our validation accuracy [described in Equation 7] closely follows the group-balanced test accuracy and model selection according to it is likely to be close to the maximum test accuracy. In contrast, if a practitioner performs model selection by maximizing the (average) iid validation accuracy, they will get much worse performance. [8] Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016. [9] Gulrajani, Ishaan, and David Lopez-Paz. "In search of lost domain generalization." ICLR. 2021. ### Question 3.a: Why did we use an ImageNet pretrained model on Waterbirds? How would SSL affect performance? For Waterbirds, we abide by the practice followed by the baselines in literature [see references at Table 2] for fair comparison. We will highlight this in the main text. All methods initialize a ResNet50 from PyTorch ImageNet-pretrained weights. A more complete explanation is that Waterbirds’ training set contains only 4795 samples, which are too few for successful supervised or SSL training from scratch. Instead, literature has chosen to test the ability of robustness algorithms in the finetuning setting. We are pleased to find that our method is not hindered by supervised pretraining in this case. --- Rebuttal Comment 1.1: Comment: I want to thank the authors for the detailed response, which addresses most of my concern. I will keep the initial positive score. --- Reply to Comment 1.1.1: Title: Thank you for your response Comment: Thank you for answering to our rebuttal and for the positive assessment of our work. To further understand the weaker and the stronger points of our paper, as well as to facilitate the decision process, we would like to know which of the concerns were addressed adequately and which were not. While it is not necessary, we would appreciate if the reviewer could specify them.
Summary: This work aims to evaluate and introduce a method to make classifiers perform well across subgroups of the data, focusing in particular on “spurious correlations” where one attribute correlates with the output label in the training set, but need not at test time. While most past work doing this requires “bias annotations”, or labels of the potential spurious attribute, some work aims to avoid spurious correlations without group info. This work reveals shortcomings in these methods; they introduce a new dataset (sMPI3D) where the goal is to classify the shape of an object, which is spuriously correlated with its color. They then introduce a new method, “bias-unsupervised logits”, which adjusts the output logits approximating the log probability of the label given the hidden attribute of x, without any access to the bias attribute. To approximate this quantity, the authors take representations trained with some self-supervised learning approach (they use MoCoV2+), then fine-tune these representations towards the target labels and use this as the prediction. The authors find that their method is comparable to existing methods that use bias annotations on Waterbirds, CelebA, cMNIST, and cCIFAR, and find that their method is the only one that reliably outperforms empirical risk minimization on sMPI3D. Strengths: * The paper is well written, motivated, and referenced throughout. * The dataset sMPI3D seems like a good contribution; in particular, the authors do a good job explaining how the number of subgroups grows exponentially with the number of attributes you care about, that this means you can’t cover all groups in the training set reliably, and this dataset gives a great simple way to test how well models extrapolate to new groups (when the individual attributes are supported, but the direct product is not) * The authors setup---no bias annotations during both training and validation---seems realistic and important for subsequent work to follow * The results against other methods on existing benchmarks seem promising, though they’re sometimes worse than other methods. Weaknesses: * There are places where the paper could use more exposition; for example, the current paper does not devote much time to how they extract bias variable observations, or time to how they construct the dataset (the two core contributions) * The method, while performing comparably to baselines, only offers significant improvement over previous methods on the sMPI3D, which the authors release (and thus have a lot more control over). This is somewhat ok because the task setup for sMPI3D is different (support over all individual attributes but not the direct product in training), but it would be nice to see additional experiments. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * Could you provide more intuition for why training on the actual labels is a good way to estimate p(y \mid z); in particular, what properties of the pretrained representation do you need for this to work? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Addressed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their careful assessment of our paper. We address the raised concerns in what follows: ### Weakness 1: There are places where the paper could use more exposition: how they extract bias variable observations, or time to how they construct the dataset (the two core contributions) Thank you for pointing this out. We estimate bias attributes via a biased classifier. That is, we train a classifier with ERM without any additional approach to avoid spurious solutions that rely on biasing attributes. Since that approach is vulnerable to spurious correlations, predictions will correlate strongly with spurious features if those exist. To create the sMPI3D dataset, we use the shape of the object as the target variable, while its color is the bias. Both generative attributes can assume 6 values so that 36 combinations are possible. A training set is created by assigning a number $C$ of colors per shape. We also make sure that all colors and shapes are represented uniformly. The test split is created by covering the shape/color combinations that **are not** observed in the training set. In our evaluation, we consider different cases with $C \in \{2, 3, 4, 5\}$. Please refer to appendices A, C and D.1 for a more in-depth exposition of both the training procedure and the construction of sMPI3D. We finally highlight that, in case of acceptance, the additional page in the camera ready version will be used for more details on the data construction along with further discussion on the effectiveness of the bias estimation approach as evidenced by Figure 1 of the rebuttal material. ### Weakness 2: Significant improvement over previous methods only on sMPI3D We would like to highlight that uLA outperforms all other methods in Table 1 in 5 out of 8 setups. Please note that in our assessment we compare against methods which may require group information during training and/or validation in order to perform well. Our method, however, outperforms them without **any access** to privileged information sources. In particular, for the cCIFAR10-1% case we outperform the state-of-the-art approach, GroupDRO (which uses group information during training and validation), with a margin of about 24% absolute accuracy. Even though these results demonstrate the effectiveness and potential of our proposal across a diverse range of scenarios, we believe our main contribution to be the improvements demonstrated in the systematic generalization case. In fact, it is fair to claim that a considerable chunk of the existing literature on robustness to spurious correlations and group fairness is expected to fail in such a case, as confirmed by our empirical assessment. We hope our contribution motivates further work in this direction given its practical relevance. ### Question 1: Could you provide more intuition for why training on the actual labels is a good way to estimate p(y \mid z); in particular, what properties of the pretrained representation do you need for this to work We simply aim for $p(y|z)$ to be such that a given class will be consistently estimated whenever a spurious feature is observed. Luckily, training a model with ERM typically satisfies that since learners tend to focus on *easy* features. Please refer to Figure 1 on the additional experiments for empirical evidence of that being the case for the cCIFAR-10 dataset. In further detail, we want to leverage the empirical observation from [1] that a biased model will predict the actual labels using the spuriously correlated (bias) variable, rather than the true feature corresponding to the class label. Using the case of cMNIST for instance, simpler predictive features (like color) are generally learnt faster than more complex predictive features (like the shape of digits). Typically training of deep neural networks will tend to use the bias attribute, when it is a simpler feature which is spuriously predictive of the target variable [2,3]. For example, it could be the case that during training most images that display the digit 5 are green. Predicting “digit 5” whenever the green feature is present in an image is sufficient to achieve minimal training errors. In **Figure 1 of the rebuttal material** we display heatmaps comparing the soft confusion matrix computed using Equation 5 versus the distribution of bias and target attributes in the training set for a cCIFAR10 (1%) experiment. We observe that the soft confusion matrix closely matches the joint distribution between the target and the bias variable. [1] Nam, Junhyun, et al. "Learning from failure: De-biasing classifier from biased classifier." NeurIPS. 2020. [2] Geirhos, Robert, et al. "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness." ICLR. 2019. [3] Rahaman, Nasim, et al. "On the spectral bias of neural networks." ICML. 2019.
Summary: The paper works on mitigating the impacts of spurious correlations during the risk minimization. The authors firstly introduce a systematic generalization task and illustrates existing methods implicitly made the assumption that all group combinations are represented within the training procedure. And then reveal that the importance of bias labels in model selection. Hence, the authors leverage pre-trained self-supervised models for bias information extraction, during the training and validating the debiased models. Strengths: The proposed method uLA trains a linear classifier on top of an SSL pre-trained model. The predictions are hence biased, and could be made use of model debiasing. The key advantage is that uLA does not need any group labels, neither in the training nor in the validation. Weaknesses: * (1) The presentation could be better , i.e., (a) in figure 2, it would be much better if authors could give more introduction about MPI3D. datasets; (b) Eqn (5) is a core component of uLA, it would be better if authors could explain more about the the insights of the estimate for $\hat{p}_{\text{data}}(Y, Z)$, for example, why this is a reliable estimate? Is it possible for authors to provide some empirical verification on the effectiveness of this estimation? * (2) Personally speaking, the proposed method is somehow related to a family of distributional robustness approach, that holds the viewpoint that "fine-tune last layer" [R1] or "post-hoc adjust the model prediction" [R2] can improve the distributional robustness substantially. Such methods also rely on a well-trained model, uLA can also be categorized in this setting since uLA leverages SSL pre-trained models. Thus, it would be better to either compare or discuss the connections between uLA and this family of robust solutions. * (3) Efficiency of uLA: SSL algorithms usually require longer training time and more computer resources, compared with classical training algorithms. References: [R1] Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations. [ICLR'23] [R2] Distributionally Robust Post-hoc Classifiers under Prior Shifts. [ICLR'23] Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * (1) The choice of ssl algorithm: I am not asking authors to run additional experiments, just curious about how uLA behaves when different SSL algorithms are utilized, if the authors already have the result. * (2) Analysis of $\eta$: are there any analysis on the role of $\eta$ in eqn. (6), either theoretical or empirical analysis? * (3) How is the performance of uLA on waterbirds when a SSL pre-trained model is utilized? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: As mentioned in weakness 3, one limitation is **Efficiency of uLA:** SSL algorithms usually require longer training time and more computer resources, compared with classical training algorithms. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback and their questions. ### Weakness 1.a: Presentation of MPI3D We update line 73 with: “In particular, we use the ‘real’ split of MPI3D which consists of photographs of a robotic arm that has a colored rigid object attached to its end effector. The images are captured in a way that controls their generative attributes, such as the shape of the rigid object or the position of the robotic arm.” In addition, we append the following at line 77: “In Appendix D.1 we describe in more detail the construction of the sMPI3D task, while in Figure 2d we illustrate an example of a systematic split.”. ### Weakness 1.b: Is Equation 5 a reliable estimate and when? We want to leverage the empirical observation from [1] that a biased model will predict the actual labels using the spuriously correlated (bias) variable, rather than the true feature corresponding to the class label. Using the case of cMNIST for instance, simpler predictive features (like color) are generally learnt faster than more complex predictive features (like the shape of digits). Typically training of deep neural networks will tend to use the bias attribute, when it is a simpler feature which is spuriously predictive of the target variable [2,3]. For example, it could be the case that during training most images that display the digit 5 are green. Predicting “digit 5” whenever the green feature is present in an image is sufficient to achieve minimal training errors. Please refer to Figure 1 of the rebuttal material where we display heatmaps comparing the soft confusion matrix computed using Equation 5 versus the distribution of bias and target attributes in the training set for cCIFAR10 (1%). We observe that the soft confusion matrix closely matches the joint distribution between the target and the bias variable. [1] Nam, Junhyun, et al. "Learning from failure: De-biasing classifier from biased classifier." NeurIPS. 2020. [2] Geirhos, Robert, et al. "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness." ICLR. 2019. [3] Rahaman, Nasim, et al. "On the spectral bias of neural networks." ICML. 2019. ### Weakness 2: Related work Thanks for the pointers. Note that we already acknowledge R1 ([37]) as a bias-supervised method in L177. We update the related work with R2 as an example of bias-supervised post-hoc method. Note that R2 differs from our proposal in that it seeks to adjust logits to align a classifier's outputs with an empirical label marginal distribution given by an **annotated** validation sample. In our case, we focus on situations where such validation data are not available, and uniform accuracy over groups is expected nonetheless. More importantly, the approach in R2 would not cover cases with systematic splits such as sMPI3D since validation data, even if annotated, would still not cover all possible groups if they are indeed i.i.d. to the training set. ### Weakness 3: Efficiency of uLA We understand the concern about training an SSL model. However, we are glad to highlight that this model is re-used to initialize the de-biased network and thus, we only need to train a single model (L179). Furthermore, training a linear probe on top of extracted features or finetuning a pretrained network typically require fewer parameter updates than training from scratch. ### Question 1: How does uLA behave when different SSL pretraining methods are used? In **Table 1 of the rebuttal material**, we demonstrate results of uLA on cCIFAR10 (1%) with BYOL [3] and BarlowTwins [4] SSL pretraining. In both of these extra cases, uLA outperforms ERM and the SotA bias-supervised baseline. Performance with BYOL is comparable to MoCoV2+ (we have not performed extensive hyperparameter search), while performance with Barlow Twins is inferior. The latter could possibly be explained by the form of the Barlow Twins objective which attempts to match an estimation of cross-covariance between features using (spurious) training statistics to the identity. [3] Grill, Jean-Bastien, et al. "Bootstrap your own latent-a new approach to self-supervised learning." NeurIPS. 2020. [4] Zbontar, Jure, et al. "Barlow twins: Self-supervised learning via redundancy reduction." ICML. 2021. ### Question 2: Analysis of Equation 6. Proposition 2.1 (L156 and Appendix A) provides a theoretical justification of the objective in Equation 6 with $\eta = 1$. The concluding result is that by this procedure the optimal model maximizes a lower bound to the group-balanced accuracy, hence the procedure behaves in a desired way. **For why $\eta$ should be tuned**, please refer to suggestions by [5] in the last two paragraphs of Section 4.1. What they refer as $\tau$ is $\eta$ in our case. In short, calibration errors of deep neural networks [6] need to be minimal (in this case of the debiased network) and tuning this helps in mitigating them. We will clarify its role in the main paper. [5] Menon, Aditya Krishna, et al. "Long-tail learning via logit adjustment." ICLR. 2021. [6] Guo, Chuan, et al. "On calibration of modern neural networks." ICML. 2017. ### Question 3: Why did we use an ImageNet pretrained model on Waterbirds? How would SSL affect performance? For Waterbirds, we abide by the practice followed by the baselines in literature [see references at Table 2] for fair comparison. We will highlight this in the main text. All methods initialize a ResNet50 from PyTorch ImageNet-pretrained weights. A more complete explanation is that Waterbirds’ training set contains only 4795 samples, which are too few for successful supervised or SSL training from scratch. Instead, literature has chosen to test the ability of robustness algorithms in the finetuning setting. We are pleased to find that our method is not hindered by supervised pretraining in this case, as long as we freeze the weights of the backbone when we are training the bias proxy network. --- Rebuttal Comment 1.1: Comment: Thanks authors for the detailed responses, which have addressed most of my concerns. After reading the author's rebuttal and the other reviewers' comments, I would like to keep my score (6: Weak Accept).
Summary: This paper introduces a new bias-unsupervised method for addressing spurious correlations, encompassing a debiasing training algorithm and a model selection paradigm. Specifically, the method employs a Self-Supervised Learning (SSL) pre-trained model as a bias proxy. The SSL model's fixed predictions are used to adjust the logit during the predictor's training phase. These predictions are also utilized in the model selection stage to adjust the weight of the sample, aiming to approximate group-balanced accuracy. Empirical evidence supports the method's efficacy, as demonstrated on several popular benchmarks, as well as a systematic generalization benchmark. Strengths: - How to design a proper bias proxy is important for the bias-unsupervised methods. - The presentation of the method is clear and generally easy to follow. - A variety of benchmarks are considered in the experiments and an ablation study part is included. Weaknesses: - Inconsistency. In the introduction part, one of the motivation of this work is to solve the failure of existing methods in systematic generalization. However, though empirically shown, it is not clearly introduced how the proposed method could achieve that improvement. - Some important related works are missing. - There are some other existing bias-unsupervised methods, including EIIL[1], PGI[2], and SCILL[3]. Similar to the method proposed in this work, all the 3 methods utilize a frozen reference model as a bias proxy. Especially, PGI is proposed for the systematic generalization task. - The logit adjustment strategy is similar to the strategies in some debiasing training methods, e.g. PoE[4] and DRiFt[5]. - A bias-unsupervised validation paradigm is also proposed in [2]. Therefore, this paper should discuss these related works and reposition its contribution. References: [1] Creager et al. "Environment inference for invariant learning." ICML, 2021. [2] Ahmed et al. "Systematic generalisation with group invariant predictions." ICML, 2021. [3] Chen et al. "When does group invariant learning survive spurious correlations?" NeurIPS, 2022. [4] Clark et al. "Don’t take the easy way out: Ensemble based methods for avoiding known dataset biases." EMNLP, 2019. [5] He He, Sheng Zha, and Haohan Wang. Unlearn dataset bias in natural language inference by fitting the residual. DeepLo workshop, 2019. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Why the proposed method can benefit systematic generalization, theoretically or intuitively? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Some theoretical justifications for the proposed method are lacking. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback, recognition of the novelty of our approach and positive remarks on the presentation of our method. ### Weakness 1/Question 1: It is not clearly introduced how the proposed method could achieve improvement in systematic generalization, intuitively or theoretically. **Theoretically**, Proposition 2.1 (L156 and Appendix A) justifies why we expect models trained with logit adjustment to systematically generalize: it optimizes for the transfer learning problem defined by Eqs 1 and 2. At the same time, systematic generalization is just an extreme case of this transfer learning problem, for which some combinations of features have 0 probability in training+validation. The scope of our analysis considers anti-causal prediction tasks for which the target variable is a generative factor. This is relevant in practice as Fig 2 suggests. We will add a sentence which clarifies the scope in Sec 2. **Intuitively** (L63), many of the existing methods will fail in a systematic generalization task, as *there are no rare samples to reweight their loss contributions or to upsample during training*. In contrast to these methods, logit adjustment avoids this pitfall. It adds the spurious $\log p(y|z)$ to the unnormalized logits of the mode, and downweights the score of the class y that is unlikely given the observed z. That is, as explained in L147-152, this additive operation will mask out the logits that are improbable to prevail under the structure of spurious correlations of the training set. ### Weakness 2: Discussion on important related works and request for repositioning our contributions. Thank you for the important references. All of them will be mentioned in Sec 5 about Related Work. In particular, [1,2,3] will also be discussed in a dedicated related work appendix section about utilizing reference models as proxies for bias information. **[1]** focus on extracting majority and minority group annotations based on a reference model, which ideally predicts based on spurious features only. In particular, they optimize for group assignments which maximally violate a relaxation of the Environment Invariance Criterion. **[3]** carefully examines various data generation processes and they propose an alternative to [1] for bias-unsupervised group design. More concretely, [3] seeks to establish conditional independence between the predictions of the bias proxy model and the target variable given the inferred groups. Our work follows more closely the approach of [6], in which a biased network is simply trained with ERM. By utilizing a frozen backbone pretrained with SSL, our approach improves on the sensitivity of [6] to the number of training steps for the bias proxy. As we demonstrate at the **left plot of Fig 2 in the rebuttal material**, the test accuracy of a uLA debiased model increases given sufficient training time of the bias proxy and after that it stays within the noise level of optimal performance. **[2]** introduce experimental settings which highlight difficulties of various training methods. They use [1] to pretrain a bias proxy and they debias with an alternative target-conditioned invariance criterion. **The authors of [2] claim that one of their colored-MNIST settings corresponds to a systematic generalization task. However we find that this is not accurate**; systematicity can only be probed for when entirely novel attribute combinations are used to generate the test samples. In their case, however, the training set contains 80% of bias-aligned samples and 20% of bias-conflicting, exposing the model to all possible combinations. In contrast, in our sMPI3D, there are always some (target, bias) combinations which are not represented in the training set at all (Fig 2d). Finally, [2] present evaluations using different model selection strategies: they use accuracy under an iid validation set, an extra ood validation set and a mixed case. However, unlike our work, they do not propose or evaluate under a specialized robust model selection methodology using bias-unsupervised iid data. ***Thus, we want to underline that to our knowledge we are the first to examine systematic generalization classifiers trained with group-robustness methods.*** Furthermore, the authors of [3] perform experiments using a methodology dubbed as TEV (Training Environments Validation), which is briefly described in Appendix D.1. Iit seems related to our approach as it validates models based on inferred groups from training. The paper cites [7] as a source for TEV, however we are not able to find a methodology which matches the provided brief description in [7], making a detailed comparison and reproducibility difficult. On the contrary, we have shown that our approach often outperforms even alternatives which use group labels during model selection (Table 1,2,3). Also, an extensive search experiment shows that our selection criterion is more reliable than an alternative version of our validation score based purely on supervised learning (Fig 4). ***Overall, we believe that our work addresses explicitly an important gap in group-robustness literature [7] and proposes an effective bias-unsupervised iid robust validation protocol.*** Finally, **[4]** employs a logit adjustment approach to debiasing VQA models by incorporating a bias proxy which is trained exclusively on question data. On the other hand, **[5]** exemplifies the same intuition as logit adjustment in that a second network is trained on examples that cannot be predicted already using the spurious structure. Our work instead utilizes a simple and generic logit adjustment loss, which is derived from first principles as we show in Proposition 2.1 and Appendix A. [6] Liu, Evan Z., et al. "Just train twice: Improving group robustness without training group information." ICML. 2021. [7] Ishaan Gulrajani and David Lopez-Paz. “In search of lost domain generalization.” ICLR. 2021. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: I appreciate the authors' efforts in addressing my concerns. However, some issues remain unresolved. Firstly, I'd like to highlight that a test set from the colored MNIST dataset as mentioned in [2] indeed exhibits systematic shifts, as evident from Figure 1 in [2]. In the test set $T_s$, the authors "colour the test set with the biasing colours, but such that no digit is coloured with its own biasing colour". This implies entirely novel combinations. Therefore, I contest the notion that [2] examines the systematic generalization of classifiers trained with group-robustness methods. Secondly, while the rebuttal acknowledges the logit adjustment methods proposed in [4] and [5], the distinctiveness of the logit adjustment method introduced in this paper remains ambiguous. Specifically, how is the new method "simpler and more generic"? Does it operate under less stringent assumptions? I look forward to clarifications for the above issues to better appreciate the authors' contributions. --- Reply to Comment 1.1.1: Title: Thank you for the response Comment: We thank the reviewer for participating in the discussion phase. We proceed to address the remaining issues: ### Systematic Generalization in [2] Considering that we agree on that a systematic generalization benchmark needs to exhibit entirely novel combinations of attributes in the test set samples, we disagree that [2] examines such a case. This is already evident in Fig. 1. Specifically, Fig 1.a presents examples from the training set. Last row contains samples of digit 9, while it suggests that the biasing/dominant color for 9 is fuschia. Notice how a cyan 9 (which is not the dominant color) also appears in the training set. At the same time Fig 9.c presents samples from the systematic generalization test set. Observe that it also contains a cyan 9 in the last row. This is not a coincidence. The cited line from the paper in the comment just means that the test set contains only bias-conflicting samples; that is samples which are generated by combinations of (digit, not biasing color of that digit), like non-fuschia 9s. It does not claim, however, that such combinations are not exposed in the training set. In fact, "own biasing color" can just mean "highly correlated with the digit in the training set", as it is the case in [2] and for example in [8]. To underline this, we repeat here the training set generation procedure as it is presented in [2], and also mentioned in our rebuttal: > COLOURED MNIST: Consider an illustrative dataset with coloured MNIST digits. For the training set, Tr, MNIST digits are coloured with a set of digit-correlated “biasing” colours 80% of the time, and with ten random colours that are different from the biasing colours the remaining 20% of the time. We hope that this makes clear that **the training set of [2] is exposed to all possible combinations of digits and colors**, and hence under their training set construction there can be no possible test set containing samples from entirely novel combinations. Thus, in contrast to our work (see Fig 2.d and Appendix D.1 of our work), [2] does not actually examine a systematic generalization task, but an easier group-robustness case as it is further elaborated in our rebuttal. [8] Nam, Junhyun, et al. "Learning from failure: De-biasing classifier from biased classifier." NeurIPS. 2020. ### In relation to [4] and [5] [4] operates under the additional assumption that input data is available in pairs of (true context, bias source). In their case (VQA tasks) true context refers to an image and the biasing source is the textual question. In our work, true predictive features and bias features are not already "disentangled" like that in an input observation $x$. In that, our method is *simpler and more generic*. On the other hand, [5] does not have a mechanism which prevents the biased model from fitting exactly the biased training set. If the biased networks fits entirely the training set, consequent logit adjustment will not be effective as it no longer a proxy for $p(y|z)$, but on the training samples it is simply acts as look-up table for the training set. To prevent that, [5] depends on early-stopping hyperparameters for the biased network, which are tuned using unspecified data resources, possibly a balanced validation set. In contrast, our work tunes hyperparameters based on our proposed robust model selection criterion using iid data, as well as our pretraining approach based on SSL and linear probing yields decreased sensitivity to the number of training steps of the biased network (Left plot Fig. 2 of rebuttal material).
Rebuttal 1: Rebuttal: We thank all reviewers for their insightful assessment of our work and for providing useful feedback and actionable suggestions. We are glad they found the method we proposed to be simple yet effective (reviewer Tfdv) and efficient since no bias labels are required during training or validation (reviewer knxG), our evaluation broad covering a number of benchmarks and including ablations (reviewer Y39n) and realistic (reviewer R4tq), the sMPI3D data to be a good contribution (reviewer R4tq), and the manuscript to be clear (reviewers Y39n and R4tq). We addressed each reviewer individually and provided additional empirical analysis as suggested. New results include: - Evaluations of models using SSL backbones pre-trained with different approaches. - Analysis of the effectiveness of using ERM trained classifiers as estimators of bias attributes. - A comparison of different cross-validation criteria show how i.i.d. accuracy yields to sub-optimality and that our proposal closely aligns with the oracle group-balanced test accuracy, as suggested by our original results. - Plot on how bias proxy training time influences group-balanced test accuracy of a debiased model. Furthermore, we summarize the edits that we suggest to perform in the paper as a result of reviewer feedback: - Clarification on the scope and relevance to systematic generalization for Proposition 2.1 - Updated related work and extra Appendix section which discusses methods to create bias proxy networks - More main paper details on MPI3D and sMPI3D - Extra study in main paper: evidence about Eqn 5 being reliable (Fig 1 of rebuttal material) - Clarification that ImageNet pre-training for Waterbirds is for fair comparison to baselines - Clarification on why we search $\eta$ - Add in Appendix the ablation for different SSL pretraining methods (Table 1 of rebuttal material) - Add in Appendix the sensitivity curve: bias proxy training time vs test accuracy of debiased model (Left plot of Fig 2 in rebuttal material) - Add in Appendix the validation and test curves during training (Right plot of Fig 2 in rebuttal material) Pdf: /pdf/3f2533c41a7a545cb6df0f2e0df5e9561b33841f.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
CoLLAT: On Adding Fine-grained Audio Understanding to Language Models using Token-Level Locked-Language Tuning
Accept (poster)
Summary: The authors propose a novel ways to train audio-text model by adding a token interaction module followed by the same frozen text encoder from the text modality after audio encoder to learn audio representation. And they show that proposed approach yield state-of-the-art results in couple audio tagging and classification tasks, cross-modal retrieval, and qualitative analysis on audio guided image generation. Strengths: - The proposed approach of appending token interaction module followed by text encoder is a very interesting and novel idea to leverage language model. - Downstream evaluation are provided with both zero-shot and linear probe settings, this provides more thorough understanding of the behavior for proposed systems. Weaknesses: - Proposed architecture is really interesting, however, the training with AudioSet labels in constructing prompts as described in section 4.1 line 248-250 seem arbitrary. The claim that "such templates have been shown to be more effective than simple concatenation" needs more in depth evidence or reference. Also given the works in increasing audio captioning data and applied to audio-text model pre-training (e.g. [1], [2]), this work can benefit from pre-training utilizing these audio and natural language descriptions besides pre-training with AudioSet with arbitrary templating on labels. - On the downstream tasks, other than audio tagging/classification tasks, language-based audio retrieval is another highly relevant task which provide different perspective to the audio-text pre-training, consider adding those audio-text benchmarks [2] to the evaluation to expand the coverage in this work. [1] Elizalde, Benjamin, et al. "Clap learning audio concepts from natural language supervision." ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023. [2] Wu, Yusong, et al. "Large-scale contrastive language-audio pretraining with feature fusion and keyword-to-caption augmentation." ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - For the audio and text encoder selections, have you tried other options, for example ResNet/CNN based for audio and other text encoders such as BERT or RoBERTa? For the other systems to compare as shown in Table 2, they all utilize different encoder architectures, such ablation study can help improve and disentangle whether the improvement comes from proposed architecture versus encoder choices. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - This work is slightly limited in then evaluation with only audio tagging/classification tasks, it can benefit from adding more audio-text tasks involving longer form natural languages to provide more holistic views. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your valuable comments and feedback. Please refer to the attached PDF in the global response for supplementary results. __1. About the selected template__ Following the zero-shot performance of the strongest baselines in our experiments [1], we employ the template in Section 4.1 rather than simple concatenation. Please refer to Table 3 in [1] for a comparison of performance using different templates. Adopting this approach enables us to demonstrate that the performance improvement achieved by CoLLAT over CLAP is not attributed to the template selection. We will clearly cite [1] in the relevant section of the paper. We made the decision to not utilize natural language descriptions or a keyword-to-caption augmentation [2] in favor of simple templates for the following reasons: * The cross-modal token-level loss function in CoLLAT aims to explicitly map each token in the given text prompt with its corresponding counterpart in the audio tokens (i.e., the patches in the Mel-Spectrogram of the audio). It may not be feasible to find such a mapping for certain tokens in natural language descriptions (e.g., stop words, adjectives) with the audio tokens. Consequently, having complex text prompts could adversely impact the training of CoLLAT, particularly given the greedy algorithm we utilized to compute $g$ for mapping audio tokens with text tokens. * CoLLAT maintains the text encoder frozen during training, as it was pre-trained using text labels from the LAION dataset rather than natural language descriptions. Thus, attempting to produce text embeddings for natural language descriptions using the CLIP text encoder to train CoLLAT could introduce a data shift problem, which could also negatively impact the training of CoLLAT. * The scale of the selected training dataset, AudioSet, which consists of approximately 2 million audio and text label pairs, is significantly larger compared to datasets with audio and text captions. In fact, it is more than three times larger than the largest publicly available dataset of this kind, LAION-630K. But we acknowledge that these points should be validated with sufficient evidence; however, due to time constraints, we leave this validation as future work. [1] Elizalde, Benjamin, et al. "Clap learning audio concepts from natural language supervision." ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023. [2] Wu, Yusong, et al. "Large-scale contrastive language-audio pretraining with feature fusion and keyword-to-caption augmentation." ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023. __2. Results for several additional tasks__ Please find the results for text-to-audio retrieval, audio-to-text retrieval, and audio captioning using the AudioCaps dataset in Figure 1.b and Table 1 in the supplementary PDF in the global response. In the latest results, CoLLAT outperforms the baselines (by approximately 20% in SPICE for audio captioning, 13.4% in MAP for audio-to-text retrieval, and 6.7% in MAP for text-to-audio retrieval), demonstrating a similar performance trend as the previously reported results in the main paper. __3. Ablations with different encoder choices__ We present the results with different choices of audio encoders in Figure 1.b of the supplementary PDF in the rebuttal. Our observations show a slight performance improvement with the AST backbone when using a pretrained initialization. Additionally, we noticed that the AST backbone architecture converges faster compared to the other backbones. This could be attributed to the stronger inductive biases present in the other encoders as compared to AST [3]. Due to time constraints, we leave the performance evaluation with text encoders for future work. We anticipate the results to be consistent across different transformer-based architectures such as BERT, ROBERTA, and CLIP, provided they are pre-trained on the same contrastive objective as CLIP. However, we acknowledge that additional experiments are necessary to evaluate performance across different text encoders trained on various pre-training objectives (e.g., masked language modeling). Considering that most downstream applications, such as text-to-image generation, rely on the text conditioning from CLIP, we have chosen it as our main text encoder. This decision aligns with one of the primary motivations of this work, which is to cost-effectively introduce audio guidance to downstream applications that are built on text guidance from pre-trained text encoders. Please also note that the majority of baselines in our study adopt the same text encoder. [3] Gong, Yuan, Yu-An Chung, and James Glass. "Ast: Audio spectrogram transformer." arXiv preprint arXiv:2104.01778 (2021). --- Rebuttal Comment 1.1: Comment: Thank the authors for your responses and extra experiments in audio captioning and text to audio retrieval. It would be great to include these results in the final paper if possible to provide more thorough view of your work. I maintain my rating and advocate for accepting of this work. --- Reply to Comment 1.1.1: Comment: We thank you for your effort in reviewing the rebuttal. We will incorporate all of your comments into the final version of the paper.
Summary: This paper presents a novel approach to train audio embeddings grounded in text embeddings for audio classification. They propose a trainable audio embedding layer while keeping the pre-trained text embedding module frozen. Apart from the contrastive loss term that maximises similarity between corresponding audio and text embeddings, authors propose a loss term that exploits audio-text alignment at token level. They also propose a loss term that minimizes the distance between embedding obtained for clean vs noisy audio sample. The authors show that their model outperforms other SOTA models for audio classification and audio retrieval. Strengths: One of the key strengths of the paper is that they freeze the pre-trained text embedding module and pull the audio embedding closer to the text embeddings obtained from the frozen module. This enables their audio embedding module to convert audio waveforms to text embeddings which can be recognized by other models using the pre-trained text module. This enabled them to showcase the efficacy of their method for audio guided image generation using pre-trained diffusion models. The proposed loss terms are well motivated. They do extensive evaluations comparing their model with other baselines and show it beats them for the two tasks of audio classification and cross-modal retrieval. The ablation studies show case the benefit of each of the proposed modules. The audio guided image generation further shows the benefit of cross-modal token level alignment loss. Weaknesses: 1. Not enough details are provided to reproduce the experiments e.g. detail of their AST backbone for audio encoder, architecture details of token interaction module. 2. Functionality of the permutation module “g” in cross-modal token level alignment loss is not very clear (equation 1). Is the permutation “g” different for different training examples? Is it different for the same example across different epochs? What is the permutation used in inference time? Or do the authors not permute the token embedding generated from audio during inference? 3. For cross-modal token-level alignment loss authors do a permutation of the token level embeddings obtained from audio before matching them with text tokens. For a length N sequence, there would be N! ways the sequence can be arranged. It would seem that would lead to a significant increase in training time trying to find the permutation that leads to the lowest loss especially when done for all the examples in a batch for all epochs. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. For token interaction module, was the token sequence of length N provided as input (query) to first layer initialized randomly? 2. Learning rate of 0.1 for Adam seems high. Clap uses 1e-3 while AST used 5e-5. Did the authors see unstable training with such high learning rate? 3. Did authors try with loss terms minimizing the distance between token embedding from audio and actual text embeddings obtained from intermediate layers of the pre-trained text encoder? 4. Why did authors choose to use learned position embeddings? Previous research has shown that using fixed sinusoidal position embedding o rotary position embeddings can also work well with transformer architectures. Did authors try using fixed position embeddings? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Authors have addressed the concerns about identifying personal information from audio recording. they have also considered the possibility that one can potentially use audio guided image generation to generate human faces which can lead to privacy and ethical issues. In my opinion they have adequately addressed the ethical concerns. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your valuable comments and feedback. Please refer to the attached PDF in the global response for supplementary results. __1. Initialization of the input to the token interaction module__ All elements in the input token embedding matrix were randomly initialized, following a normal distribution with a mean of zero and a standard deviation of 1. We will provide a clearer explanation of this point in Line 195 of the main paper. __2. Learning rate__ We did not observe significant instabilities with the selected learning rate of 0.1, combined with the other optimal hyperparameters. During the initial iterations with a small learning rate, it was observed that the absolute values of parameter updates could be relatively small. This behavior was more noticeable when utilizing a later layer of the text encoder to compute the token-level loss functions. This behavior can be attributed to two factors: (1) the strong dependence of the optimal learning rate on the objective functions and the weights assigned to them (lambda values), rather than the architecture; and (2) the better pre-trained initialization of most components in CoLLAT. Please also note that the chosen learning rate serves as an upper bound. The learning rate is adaptively adjusted for each parameter when using the Adam optimizer. However, we observed instances of instabilities and encountered challenges related to reaching trivial solutions (e.g., averaged embedding as the token embeddings) when training CoLLAT solely with the cross-modal token alignment loss without applying $g$. __3. Results when different intermediate layers are used to compute the token-level alignment loss__ We present the results for this study in Figure 2.a. As can be see, we have identified that the first layer is the optimal layer for performing token-level alignment. This finding may be attributed to the following reasons: * As CoLLAT utilizes a frozen text encoder in both the text and audio encoding pipelines, performing cross-modal token-level alignment at an earlier layer of the frozen text encoder inherently aligns the token embeddings in subsequent layers as well. * The token embeddings from the intermediate layers in CLIP are more contextualized, resulting in less distinct embeddings for different tokens due to the information-sharing with their contextual surroundings. CoLLAT effectively leverage this by employing the first layer of CLIP (i.e., static embeddings of CLIP) to compute the cross-modal token-level loss function. This approach enhances the likelihood of achieving optimal reordering with our greedy reordering method to compute $g$, as the token embeddings show greater differences at the earliest layers. __4. Regarding positional embeddings__ The decision to adopt learnable positional embedding in CoLLAT was primarily motivated by AST [1], as we employed the pre-trained checkpoint of AST during our experiments. In our audio encoder, the patches of the audio Mel spectrogram are treated as tokens. Given the existence of long-distance and somewhat arbitrary spatial dependencies among such tokens, the adoption of learnable positional embedding is widely accepted in similar domains [1,2]. For our text encoder, we follow the same positional embedding technique as CLIP, as we keep our text encoder fixed during training. [1] Gong, Yuan, Yu-An Chung, and James Glass. "Ast: Audio spectrogram transformer." arXiv preprint arXiv:2104.01778 (2021). [2] Dosovitskiy, Alexey, et al. "An image is worth 16x16 words: Transformers for image recognition at scale." arXiv preprint arXiv:2010.11929 (2020). __5 More details about the architecture__ We adopt the same architecture as AST to define our audio encoder. The cross-attention block in the token-interaction module is architecturally similar to the cross-attention block proposed in [3]. Our CLIP text encoder is taken from the CLIP encoder in the Stable Diffusion V2 model. To ensure the paper is self-contained, we will include these specific details in the supplementary material of the final version. In the supplementary material of the rebuttal, we present the ablations (see Fig. 2b) that we conducted to determine the optimal number of cross-attention blocks in the token-interaction module. Following this study, we set the number of cross-attention blocks to 8. Additionally, in Figure 1.b of the supplementary material, we provide the results obtained using different audio encoders, which led us to select AST as our backbone architecture. [3] Chen-Wei Xie, Jianmin Wu, Yun Zheng, Pan Pan, and Xian-Sheng Hua. Token embeddings alignment for cross-modal retrieval. In Proc. of ACM-MM, 2022. __6 More details about the functionality of $g$__ The introduction of $g$ in the cross-modal token alignment loss allows CoLLAT to be agnostic to the order of classes within the text. As a result, CoLLAT's objective solely focuses on generating all the textual concepts from a given text prompt using the associated audio clip, without penalizing incorrect ordering. Since there are multiple ways to concatenate the text labels of an audio clip to form a text prompt, CoLLAT's token interaction module cannot predict the corresponding text tokens in the correct order. This is because the module does not have access to information regarding the specific order used to create the text prompt. Providing such information to the token interaction module can lead to inference-related problems such as which order should be used during inference. The permutation $g$ is different for different training examples and even for the same example across different epochs, especially during the early stages of training. This permutation is necessary only for computing the cross-model token alignment loss function and is not required for inference (obtaining the audio embedding). We agree with you regarding the complexity of $g$, which is N! for each instance, and it acts as the bottleneck in the training loop of CoLLAT. --- Rebuttal Comment 1.1: Comment: I thank the authors for their revision. The authors have adequately answered my questions and concerns. I also thank them for providing additional experiments in the supplementary material showing the results on AudioCaps and providing more ablation studies. I think the methodology is well motivated and it could find applications in areas where fine-tuning a big text encoder like CLIP would be bottleneck. Training time is however still a concern due to the permutation operation. I raise my review score from 5 to 6. --- Reply to Comment 1.1.1: Comment: We thank you for your effort in reviewing the rebuttal. We will incorporate all of your comments into the final version of the paper.
Summary: This paper introduces a method to train an audio encoder to produce audio embeddings that match text embeddings. Contrary to other methods, the authors suggest to keep the text encoder fixed so as to enable its use in other contexts. More specifically, a Transformer audio encoder produces audio embeddings. A second model produces audio-to-text embeddings with the same dimension as the reference text, which are re-ordered and matched with the text embeddings. This module allows the original audio embeddings to have a fine-grained correspondance with the text. Finally the outputs of the frozen text encoder applied to the reference text embeddings and the embeddings derived from the audio-to-text model are matched with a contrastive loss. The experiments show good results for audio classification tasks as well as cross-modal retrieval and audio-guided image generation. Strengths: - The paper is clear and convincing. The introduction and related work section are especially well written and clear, containing a good presentation of the challenges and contributions - The motivation to keep the pre-trained text encoder untouched is clear and interesting. The results are a good proof of the interest of this approach. The examples illustrate well the effectiveness of the method. - The idea to produce fine-grained correspondance between the audio embeddings and the text is reasonable and pretty good. - The experiments look sound and are convincing Weaknesses: - The presentation and notation could be more explicit at some places, for example: - Intro, 2nd paragraph: "as shown in fig 1b" : not clear what to look at in 1b. Maybe explain which one is the fine-tuned CLIP? - it looks like there is a formatting issue in 3.1 - in 3.3, the notation requires more explanation. It looks like the two S are transpose of each other. - Eqn. 2 requires more explanation. what is the (i, i) subscript? softmax along which dimension of S? - It would be nice to see some examples of classes/text in different datasets (maybe in supplementary material) to illustrate the complexity of datasets - Some parts could benefit from more explanations, for example: - How is the reordering $g$ computed? - How are the initial input defined in the denoising pipeline? - It would be interesting to have a deeper study and analysis of the results in Fig. 2. While reading, I expected to see the corresponding experiments and analysis in the experiment section. Minor: - l.107: "correspndence." - formatting issue in the text in Sec. 3.1 making the section hard to read and understand - Missing reference for AudioSet - Evaluation: missing "(3)" for the third task - Sec. "4.1 Datasets" does not only deal with datasets. There is not Sec. 4.2 Technical Quality: 3 good Clarity: 3 good Questions for Authors: 3.1: how are the initial input defined in the denoising pipeline? 3.2: greedy reordering: how much better would the method be if the optimal reordering was found? could the reordering be learned by the token interaction module, or why isn't it? How the order of the classes in the text and in the audio relate in the training set? 4.1: In the template, does the ordering of the classes matter? (e.g. if the classes appear one after the other in the audio) Sec. 5.1: Zero Shot: is the possible set of labels in the text encoder also matching the template presented in 4.1? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your valuable comments and feedback. Please refer to the attached PDF in the global response for supplementary results. __1. Unclarity about the fine-tuned CLIP models in Fig. 1b__ In Fig. 1b, CLAP, AudioCLIP, and Cons-CLAP models fine-tune the pre-trained CLIP text encoder. In contrast to CLAP and AudioCLIP, Cons-CLAP incorporates a constraint to alleviate catastrophic forgetting of CLIP's existing knowledge during training. This constraint penalizes the model if it significantly forgets existing knowledge. In the final version, we will clearly specify and highlight these points. __2. Formatting issue in Section 3__ Thank you so much for pointing out these issues. In Sec. 3.1, the notations should be corrected as bellow. We will correct these issues in the final version. ${x^a_i}{i=1}^M \rightarrow$ {$x^a_i$}$_{i=1}^M$ ${x^t_j}{j=1}^N \rightarrow$ {$x^t_j$}$_{j=1}^N$ ${h^a_i}{i=1}^M \rightarrow$ {$h^a_i$}$_{i=1}^M$ ${h^t_j}{j=1}^N \rightarrow$ {$h^t_j$}$_{j=1}^N$ __3. Explanation on $S$ in Section 3.3__ In Section 3.3, two $S$ matrices are $|B|\times |B|$ matrices that are transposes of each other, as you mentioned. Here, $|B|$ refers to the batch size. The $(i,i)$ subscript of the $S$ matrices represents the elements along the main diagonal. The elements along the main diagonal in $S$ represent the dot product similarity of the embeddings between corresponding audio and text pairs. We will make these facts clear in the final version of the paper. __4. Few examples from different datasets__ We will include a few examples from each dataset and detailed statistics related to complexity (such as annotator agreement) in the final supplementary material of the paper. __5. About how $g$ is computed__ $g$ aims to find the permutation of the token-level embeddings in {$h^{a-t}_j$}$_1^N$ that minimizes the L1-norm distance between the permuted audio token embeddings of an audio clip and the corresponding text token embeddings. $g$ is computed greedily given the time complexity of finding the optimal solution for $g$. Our greedy algorithm operates as follows: Given the token embeddings of {$h^{a-t}_j$}$_1^N$ and {$h^{t}_j$}$_1^N$, we initiate the process by starting from the right-most token embedding in {$h^{t}_j$}$_1^N$. This token is then paired with the closest token embedding in {$h^{a-t}_j$}$_1^N$. Next, we proceed to the second token embedding from the right in {$h^{t}_j$}$_1^N$, excluding the already mapped token embedding in {$h^{a-t}_j$}$_1^N$ from the possible set to be paired with the selected text token embedding. We iterate this process until we obtain the greedy one-to-one mapping between all the tokens in {$h^{a-t}_j$}$_1^N$ and {$h^{t}_j$}$_1^N$. We observed that this greedy approach could produce optimal re-ordering unless there are very similar classes in the same audio clip. Consequently, we do not anticipate a significant variation in the performance if the optimal reordering can be achieved, which we leave as future work given the time complexity of training the model with the optimal reordering. __6. Initialization of the input to the denoising pipeline in the token interaction module__ All the elements in the input matrix were randomly initialized by following a normal distribution with a mean of zero and a standard deviation of 1. __7. Impact of the order of classes in text prompts__ CoLLAT is agnostic to the order of the classes presented in the text. This is the primary motivation behind the introduction of $g$, which reorders the token embeddings in {$h^{a-t}_j$}$_1^N$ prior to computing the token alignment loss. With the aid of $g$, CoLLAT's objective is solely focused on producing all the textual concepts from a given text prompt using the corresponding audio clip, but incorrect ordering is not penalized. Consequently, the ordering of class in template in 4.1 does not matter. For a given audio clip with multiple labels, there are multiple ways to concatenate these labels together and generate the text prompt using the template described in Section 4.1. However, since we do not provide any information about the specific order utilized to create the text prompt to our token interaction module (providing such information to the token interaction module causes problems during inference), it is not possible for the module to learn the reordering process for a given (audio, text) pair by itself. This is the main motivation to make CoLLAT agnostic to the order of the classes in the text prompt. __8. Impact of the oder of sound events in audio clips__ Due to cross-attention mechanism in CoLLAT's token interaction module, CoLLAT remains agnostic to the order of classes present in the audio. Even in cases where classes overlap within the same time period, CoLLAT can accurately map them to their corresponding text counterparts if their frequencies differ. This capability is possible because CoLLAT's audio encoder treats each patch in the Mel-spectrogram of an audio clip as a distinct audio token. __9. About Zero Shot setting__ Yes, the possible set of labels is also processed using the same template as described in Section 4.1 (_This is a sound of [class label]_) before being inputted into the text encoder to obtain the corresponding text embeddings.
Summary: The paper introduces CoLLAT, an audio-language framework that makes use of a pre-trained language model. The framework is trained with a contrastive objective which encourages learning of fine-grained audio-text grounding. The paper presents very strong results for diverse downstream tasks, such as zero-shot audio classification, audio-image retrieval, and audio-guided image generation. Strengths: The proposed framework leverages a locked pre-trained language model (CLIP text encoder) as audio and text encoder. It learns to map the audio input to input tokens for the frozen text encoder, which retains the strong capabilities of the language model. The model is trained to encourage very fine-grained understanding of audio. It achieves very convincing results on various downstream tasks. Weaknesses: Given that this paper claims to yield fine-grained audio-to-text grounding, I believe that it would be good to show results for audio-text retrieval on CLOTHO, AudioCaps, and SoundDescs [A,B]. One of the main model contributions, the token interaction module, should be explained better and in more detail. For instance, it is not mentioned how many blocks are used (l.192: “multiple cross-attention-based blocks”). In addition, it would be good to add a detailed explanation on the denoising pipeline in this module. Furthermore, different components of this block should be ablated in the model ablation study. Similarly the paper does currently not contain a model ablation for 1) the use of the same encoder for audio and text instead of using a different audio model, 2) the effect of using pre-trained encoders. These should be added to validate the architectural choices. Overall, I think the paper is interesting, and I would be happy to consider raising my score if the missing evaluations and ablations are added and convincing. [A]: Oncescu et al.: Audio Retrieval with Natural Language Queries, INTERSPEECH 2021 [B] Koepke et al.: Audio Retrieval with Natural Language Queries: A benchmark study, Transactions on Multimedia 2022 Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * Is CLAP shown in Figure 5 referring to [6]? If not, different naming should be used. * I believe that some relevant prior works should be referenced, e.g. [C,D,E]. * What is the computational cost to train the framework and what kind of hardware was used for training? [C]: Wu et al: LARGE-SCALE CONTRASTIVE LANGUAGE-AUDIO PRETRAINING WITH FEATURE FUSION AND KEYWORD-TO-CAPTION AUGMENTATION, ICASSP 2022 ​​[D]: Deshmukh et al.: Audio retrieval with wavtext5k and CLAP training, 2022 [E]: Wu et a.: Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation, ICASSP 2023 * Typos: l.62: lost -> lose l.136 alignemnt -> alignment l.181: space missing after “CLIP.” l.232: We adopt AudioSet dataset -> We adopt the AudioSet dataset Figure 4 is a table and should be labelled accordingly. What does t in $128x100 t$ in l.240 refer to? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your valuable comments and feedback. Please refer to the attached PDF in the global response for supplementary results. __1. Results for several additional tasks.__ Please find the results for text-to-audio retrieval, audio-to-text retrieval, and audio captioning using the AudioCaps dataset in Figure 1.b and Table 1 in the supplementary PDF in the global response. In the latest results, CoLLAT outperforms the baselines (by approximately 20% in SPICE for audio captioning, 13.4% in MAP for audio-to-text retrieval, and 6.7% in MAP for text-to-audio retrieval), demonstrating a similar performance trend as the previously reported results in the main paper. Due to time constraints, we will consider collecting results with CLOTHO and SoundDescs as future work. Please take note that the experimental setup for the zero-shot column in the audio classification results table in the main paper is similar to the audio-to-text retrieval, as it solely depends on the similarity of embeddings between text and audio to identify corresponding pairs, without utilizing any downstream classifier as in the LP setting. Therefore, the zero-shot audio classification results can also be considered as supplementary results for audio-to-text retrieval. __2. More ablations.__ In the supplementary PDF of the rebuttal, we provide additional ablations with different audio encoder choices (refer to Fig. 1.b), varying numbers of cross-attention blocks (refer to Fig. 2.b), and distinct layer selections for token-level loss computation (refer to Fig. 2.a). These ablations were conducted to guide different design decisions in CoLLAT. In our final architecture, we have incorporated 8 cross-attention blocks within the token-interaction module. We will clearly mention all these hyperparameters in the final version. __2.1. Impact of using the same encoder for audio and text instead of using a different audio model__ CoLLAT's architecture adopts the same frozen text encoder in the audio encoding pipeline (refer to Fig. 3 in the main paper). Considering the nature of the token-level loss functions, if we employ completely different models (without any parameter sharing) for audio and text encoding, it needs to train an audio encoder that matches the scale of CLIP (with over 1B parameters) for audio encoding. This learning process is resource-intensive, especially given the added complexity of $g$ in the cross-modal token-level loss function. To tackle this challenge, CoLLAT proposes a cost-effective alternative by efficiently sharing the parameters of a pre-trained text encoder within its audio encoding pipeline. With this architecture, our audio encoder can effectively utilize all the language understanding capabilities of the pre-trained text encoder when processing audio. This may explain why CLIP achieves significantly superior results in tasks like audio captioning compared to the baseline, which employs different models for audio and text encoding. __2.2. Impact of using pre-trained encoders__ Adopting pre-trained encoding heads to initialize audio encoders has been shown to be useful in previous similar works (e.g., AudioCLIP). We observe a similar trend for the ESC-50 and FSD-50K datasets with CoLLAT, which is reported in Fig. 1.b of the supplementary PDF. We also noticed that pre-trained initialization improves the convergence speed. __3. Differences between CLAP and the model used in Figure 5__ To effectively utilize the denoising diffusion pipeline for image generation as a readily available solution without re-training, two key requirements need to be met: (1) the text encoder must remain frozen during the audio encoder training, and (2) the audio encoder should accurately predict token embeddings of the corresponding text prompt for a given audio clip. This is mainly because the selected denoising diffusion model accepts token-level embeddings of either the text prompt or audio clip to guide the image generation process. To fulfill these requirements, we adopt CoLLAT's architecture and CLAP's training objective to train the baseline model denoted as CLAP in Fig. 5. For further details, please refer to lines 297 to 305 in the main paper. In the final version, we will ensure that different naming is used for this baseline model. __4. About the missing citations__ Thank you for pointing out these missing citations. It seems that the same work was mistakenly pasted for both [C] and [E]. We would greatly appreciate it if you could let us know if you have found two different works that should be cited for [C] and [E]. We will ensure that all these works are properly cited in the final version. __5. Computational cost and hardware__ We utilize a GPU cluster comprised of 8 V100 GPU cards for training CoLLAT. With this hardware configuration, it takes approximately 320 GPU hours to train the model on AudioSet using the optimal hyperparameter setting. We will include these details in the main paper. __6. Formatting issues__ Thank you for pointing the formatting issues and typos. We will correct all of these issues in the final version. __7. Meaning of $t$ in $128x100t$__ Here, t is the time duration of the audio clip in seconds. We will define this notation clearly in the final version. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for clarifying my doubts and for providing additional convincing experiments, in particular the additional ablations and tasks. Therefore, I vote to accept the paper. However, I urge the authors to incorporate all the additional experiments and clarifications in a final version of the paper. Additionally, the state-of-the-art performances should be included in the tables. In particular, Table 1a in the additionally provided pdf should include the sota results as an upper bound along with reporting not only R@10 but also other commonly reported metrics (e.g. R@1,.. etc). --- Reply to Comment 1.1.1: Comment: We thank you for your effort in reviewing the rebuttal. We will incorporate all of your comments into the final version of the paper.
Rebuttal 1: Rebuttal: We thank all the reviewers for their valuable comments and feedback. Kindly find the attached PDF in this global response for additional results that support our rebuttal. Pdf: /pdf/c1ece049e6cb2e9c913c84397c2fddc84ef3c305.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null